US Treasury Report on Artificial Intelligence in Financial Services: Risks, Regulation and the Path Forward
Table of Contents
- Why the US Treasury Report on AI in Financial Services Matters
- AI Adoption in Financial Services: 78% of Firms Using Generative AI
- How Banks and Financial Institutions Are Using AI Today
- Data Privacy, Security and Quality Risks in AI-Powered Finance
- AI Bias, Explainability and the Hallucination Problem
- Consumer Protection Gaps in the Age of AI Financial Services
- Concentration and Systemic Risks From AI Provider Dependence
- Illicit Finance and AI-Powered Fraud: Deepfakes and Beyond
- Treasury Recommendations for AI Regulation in Finance
- What Financial Institutions Must Do Now to Prepare
📌 Key Takeaways
- Massive adoption underway: 78% of financial firms are already implementing generative AI, with 86% expecting a significant or moderate increase in their AI model inventory in the near term
- $1 billion fraud detection: Treasury’s own AI deployment for check fraud detection recovered $1 billion in fraud and improper payments in 2024
- Six major risk categories: Data privacy gaps, bias and hallucinations, consumer protection weaknesses, concentration risks, third-party vulnerabilities, and illicit finance threats
- Regulatory gaps identified: Existing laws like GLBA and FCRA may leave consumer financial data less protected than other data types as AI adoption expands
- Concentration risk alarm: Financial system dependence on a handful of foundational AI models from few providers creates potential for widespread disruptions and systemic instability
Why the US Treasury Report on AI in Financial Services Matters
In December 2024, the US Department of the Treasury published one of the most comprehensive government assessments of artificial intelligence in financial services to date. Based on 103 comment letters received through a public Request for Information process, the report synthesizes perspectives from financial firms, consumer advocacy groups, technology providers, fintech companies, trade associations, and consulting firms—creating a panoramic view of how AI is transforming banking, insurance, and capital markets.
The significance of this report extends far beyond its 100+ pages. It represents the US government’s most detailed articulation of the opportunities, risks, and regulatory gaps created by AI adoption in the financial sector. Multiple Treasury offices contributed to the findings, including the Office of Financial Institutions Policy, the Federal Insurance Office, the Office of Capital Markets, the Office of Consumer Policy, and the Office of Cyber Security and Critical Infrastructure Protection.
The timing is critical. AI use in financial services has shifted dramatically in the past two years, moving from traditional machine learning models—statistical algorithms that classify and predict—to generative AI systems that create new content, analyze unstructured data, and interact directly with consumers. This shift brings transformative potential for efficiency and inclusion, but also introduces risks that existing regulatory frameworks were never designed to address.
AI Adoption in Financial Services: 78% of Firms Using Generative AI
The Treasury report paints a picture of rapid, widespread AI adoption across the financial sector. Nearly 8 in 10 financial firms (78%) are already implementing generative AI for at least one use case. This is not a future trend—it is current reality. Moreover, 86% of firms expect a significant or moderate increase in their AI model inventory due to generative AI adoption, and 37% anticipate significantly expanding their use cases.
The top use cases reveal where financial institutions see the most immediate value. Enhancing risk and compliance leads at 32%, followed by improving client engagement at 26% and software development at 24%. Near-term generative AI applications concentrate in risk identification and assessment, code assistance, document querying and extraction, and financial crime detection including anti-money laundering (AML) systems.
The trajectory from traditional AI to generative AI represents a fundamental shift in capability and complexity. Traditional machine learning has been used in financial services for decades; statistical credit scoring dates back to the 1950s, and statistical modeling has long underpinned fraud detection and risk management. Generative AI is qualitatively different: it creates new content based on training data, requires orders of magnitude more parameters, demands significantly more computational power and financial investment, and introduces entirely new failure modes like hallucinations—confidently stated but incorrect outputs.
This adoption surge is happening simultaneously across institutions of vastly different sizes and sophistication levels, creating what several respondents described as an uneven competitive landscape. Larger institutions with massive data assets, engineering teams, and capital budgets can develop custom AI solutions, while smaller firms depend heavily on third-party AI providers—a dependency that creates its own set of systemic risks.
How Banks and Financial Institutions Are Using AI Today
The Treasury report catalogs both external consumer-facing and internal operational AI applications across the financial sector. In credit underwriting, machine learning now analyzes alternative data sources—rent payments, utility bills, and geolocation data—to evaluate creditworthiness. This has the potential to expand credit access for “credit invisible” consumers, including minorities and small businesses without traditional credit histories.
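To make the underwriting shift concrete, here is a minimal sketch of the pattern the report describes: a classifier trained on alternative-data signals rather than traditional bureau history. Everything below (the feature names, the synthetic data, scikit-learn as the toolkit) is an illustrative assumption, not something the Treasury report specifies.

```python
# Hypothetical sketch: scoring "credit invisible" applicants with alternative
# data. Features and outcomes are synthetic, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Assumed alternative-data features: on-time rent and utility payment rates,
# and months of observable cash-flow history.
X = np.column_stack([
    rng.uniform(0, 1, n),    # rent_on_time_rate
    rng.uniform(0, 1, n),    # utility_on_time_rate
    rng.integers(0, 60, n),  # months_of_cash_flow_history
])
# Synthetic repayment outcome loosely tied to payment behavior.
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.2, n) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```

In production, features like these would come from consumer-permissioned data sources, and any such model would need the fair lending testing discussed later in this article.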
Customer service represents another major frontier. Payment providers use AI to analyze point-of-sale data for personalized recommendations, while natural language processing enables sentiment analysis, translation, and transcription at scale. Many of the largest banks have deployed chatbots, though some remain hesitant to use large language models in customer-facing applications due to liability and reputational concerns—a caution that the Treasury’s findings on hallucination risks suggest may be well-founded.
In investment and trading, AI-driven insights improve forecasting, automate execution, manage portfolio workflows, and assess risk-return tradeoffs. Robo-advisors offer personalized investment advisory services at a fraction of traditional advisory costs. The insurance industry uses AI across underwriting, claims processing, fraud detection, and catastrophic weather loss forecasting.
On the operational side, AI is transforming compliance and risk management. AML and sanctions compliance systems analyze large datasets to detect anomalies and flag suspicious activities. Back-office functions—recordkeeping, predictive texting, audio transcription, and document search—are being automated with generative AI. Treasury’s own experience validates this potential: the Office of Payment Integrity used machine learning for check fraud detection, resulting in $1 billion in recovered fraud and improper payments.
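The AML pattern the report describes is, at its core, anomaly detection over transaction streams. As a hedged illustration, the sketch below flags a structuring-like pattern with an isolation forest; the features, thresholds, and data are invented for the example and are not Treasury's method.

```python
# Minimal sketch of anomaly-based transaction monitoring, the general pattern
# behind AML systems. Features and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: amount (USD), hour of day, transactions in the past 24h.
normal = np.column_stack([
    rng.lognormal(4, 1, 10_000),   # typical transaction amounts
    rng.integers(8, 22, 10_000),   # business-hours activity
    rng.poisson(3, 10_000),        # modest daily volume
])
model = IsolationForest(contamination=0.01, random_state=1).fit(normal)

# A structuring-like pattern: a just-under-threshold deposit at 3 a.m.
# amid unusually heavy account activity.
suspicious = np.array([[9_900.0, 3, 40]])
print(model.predict(suspicious))  # -1 flags the transaction for analyst review
```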
Data Privacy, Security and Quality Risks in AI-Powered Finance
The Treasury identifies data-related risks as the foundation upon which all other AI risks rest. High-quality data—clean, complete, standardized, and comprehensive—is essential for training effective models, testing efficacy, and reducing bias. Yet the financial sector faces persistent challenges in data curation, particularly when generative AI models require vastly larger and more diverse training datasets than traditional models.
Data security concerns are amplified by AI’s architecture. When consumer data is transferred outside financial institutions for AI training and processing, enforcement of data security standards becomes significantly more difficult. The risk of “data poisoning,” in which malicious actors corrupt the data used to train or retrain models so that the deployed model misbehaves, introduces a novel attack vector that traditional cybersecurity frameworks may not adequately address.
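The report stops short of prescribing defenses, but a common first-line integrity control is to fingerprint approved training snapshots so that any later tampering is detectable. A minimal sketch, with a hypothetical file path and stored digest:

```python
# One basic data-integrity control against poisoning: hash training-data
# snapshots at approval time and verify before retraining. The file path and
# stored digest below are hypothetical placeholders.
import hashlib
from pathlib import Path

def dataset_digest(path: Path) -> str:
    """SHA-256 over the raw bytes of a training-data snapshot."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# At training time, record the digest alongside the approved model version.
recorded = "..."  # digest stored in the model registry at approval time

# Before any retraining run, recompute and compare.
if dataset_digest(Path("training/transactions_2024q4.parquet")) != recorded:
    raise RuntimeError("training data changed since approval; investigate")
```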
Perhaps the most striking finding concerns data privacy gaps. The Gramm-Leach-Bliley Act (GLBA), which provides the primary federal framework for financial data privacy, uses an opt-out standard—meaning consumer data can be shared unless consumers actively object. Several respondents called for a shift to an opt-in standard, where explicit consent would be required before data sharing. The Consumer Financial Protection Bureau (CFPB) concluded that state-level data privacy laws, which are emerging across the country, often carve out data already covered by GLBA—potentially leaving consumer financial data less protected than other types of personal data.
Technical solutions are emerging to address some of these challenges. Homomorphic encryption allows computation on data while it remains encrypted, and federated learning enables model training across distributed datasets without centralizing sensitive information. Ironically, AI models themselves may improve privacy violation detection beyond what hard-coded, rules-based systems can achieve.
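As a rough illustration of why federated learning preserves privacy, the toy sketch below implements federated averaging (FedAvg): each institution runs a few steps of local training, and only model weights, never raw customer records, leave its premises. The data, model, and hyperparameters are invented for the example; real deployments layer on secure aggregation and differential privacy.

```python
# Toy federated averaging (FedAvg) sketch. Purely illustrative: four
# hypothetical banks share only model weights with a central server.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few steps of logistic-regression gradient descent on local data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))          # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

rng = np.random.default_rng(2)
true_w = np.array([1.0, -2.0, 0.5])           # shared underlying signal

def make_bank_data(n=200):
    X = rng.normal(size=(n, 3))
    y = (X @ true_w + rng.normal(0, 0.3, n) > 0).astype(float)
    return X, y

banks = [make_bank_data() for _ in range(4)]  # raw data stays local

global_w = np.zeros(3)
for _ in range(20):
    # Each bank trains locally; the server averages the returned weights.
    updates = [local_update(global_w, X, y) for X, y in banks]
    global_w = np.mean(updates, axis=0)

print(np.round(global_w, 2))  # direction roughly tracks the shared signal
```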
AI Bias, Explainability and the Hallucination Problem
The Treasury report is candid about the bias challenge: AI models are “not as impartial as they may appear,” though they could produce less discriminatory results if properly developed. Training data that reflects historical patterns—including decades of racial redlining and gender-based lending discrimination—inevitably embeds those biases into AI outputs. User queries can also introduce bias, creating a dual-source problem that is particularly difficult to mitigate.
Explainability poses a fundamental technical challenge for generative AI in finance. Traditional AI models, while sometimes opaque, operate with manageable numbers of parameters. Generative AI models have orders of magnitude more parameters, making it dramatically more difficult to explain which factors influenced a specific output. For credit decisions, investment recommendations, or insurance underwriting—all areas with legal requirements for explainability—this opacity creates genuine compliance risks.
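The report does not endorse a specific interpretability technique, but model-agnostic methods give a sense of what "explainable" can mean in practice. The sketch below uses scikit-learn's permutation importance on synthetic data; the credit feature names are hypothetical.

```python
# One model-agnostic explainability technique: permutation importance, which
# measures how much shuffling each input degrades model performance.
# Synthetic data; feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.normal(size=(2_000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 2_000) > 0).astype(int)

model = GradientBoostingClassifier(random_state=3).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=3)

for name, score in zip(["income_stability", "payment_history", "noise"],
                       result.importances_mean):
    print(f"{name:>18}: {score:.3f}")  # the noise column should score near 0
```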
The vendor relationship compounds the explainability problem. Financial firms using third-party AI models may not receive access to the information needed to assess risks and develop controls. Some respondents asserted that vendors should grant access to AI models and nonpublic impact assessments, while others raised concerns that excessive disclosures could create cybersecurity vulnerabilities or expose proprietary trade secrets.
Hallucinations—the phenomenon where generative AI models convincingly produce incorrect output—represent a risk category that is genuinely new to financial services. Unlike traditional model errors, which typically produce nonsensical or obviously wrong outputs, hallucinations can appear authoritative and well-reasoned while being factually incorrect. In a financial context, a hallucinated credit score, compliance assessment, or investment recommendation could have material consequences. The report notes that reducing hallucination frequency is a “key priority” among both AI developers and customers, but that it remains “challenging to pinpoint the source of errors generating hallucinations.”
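Until hallucinations can be prevented at the source, a pragmatic pattern is to validate generated claims against a system of record before they reach a customer. A minimal guardrail sketch; the ledger lookup is a hypothetical stand-in for a core banking call.

```python
# Guardrail sketch: refuse to surface a generated dollar figure that
# contradicts the ledger. The lookup function is a hypothetical stand-in.
import re

def balance_of_record(account_id: str) -> float:
    """Hypothetical authoritative lookup (core banking system, ledger)."""
    return 1_523.88

def validate_reply(reply: str, account_id: str, tolerance: float = 0.01) -> str:
    """Check any dollar figure in a generated reply against the ledger."""
    match = re.search(r"\$([\d,]+\.\d{2})", reply)
    if match:
        claimed = float(match.group(1).replace(",", ""))
        if abs(claimed - balance_of_record(account_id)) > tolerance:
            return "I couldn't verify that figure; routing you to an agent."
    return reply

print(validate_reply("Your balance is $1,523.88.", "acct-001"))   # passes
print(validate_reply("Your balance is $15,238.80.", "acct-001"))  # blocked
```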
Consumer Protection Gaps in the Age of AI Financial Services
The consumer protection analysis is among the report’s most impactful sections. The Treasury identifies multiple vectors through which AI could harm consumers: lack of transparency about data collection and model use, potential to steer consumers toward predatory products, digital redlining of communities, and inaccurate chatbot responses creating liability and reputational damage.
The data collection issue is particularly concerning. Financial institutions now collect vast amounts of data about consumers who may be entirely unaware of the scope and depth of this surveillance. The debate over opt-in versus opt-out approaches revealed deep disagreements, with some respondents arguing that consent is “meaningless if no comparable alternative is available”—a consumer cannot effectively opt out if every financial provider uses similar data collection practices.
Existing consumer protection laws show significant gaps when applied to AI. The Fair Credit Reporting Act (FCRA) does not apply to all consumer data or all data providers. The Equal Credit Opportunity Act (ECOA) and Fair Housing Act apply regardless of technology, but the CFPB noted that fair lending and consumer protection laws—including prohibitions on unfair, deceptive, or abusive acts and practices (UDAAP)—were not designed for the specific challenges of AI decision-making. Several respondents suggested extending anti-discrimination principles to products not currently covered, such as deposit accounts.
One particularly alarming finding: a respondent noted that imprecise AI models for suspicious activity detection had led to an increase in improperly closed bank accounts, with no appeal process available to affected customers. This represents a concrete harm where AI deployment has already caused material damage to consumers, disproportionately affecting those least equipped to challenge institutional decisions.
Concentration and Systemic Risks From AI Provider Dependence
The Treasury’s analysis of concentration risk may have the most far-reaching implications for financial stability. The report highlights a structural vulnerability: numerous financial applications are built on only a handful of foundational AI models from a few providers. An interruption at a single AI provider could create widespread disruptions across the entire financial system.
This concentration creates several interconnected risks. First, model monoculture—where many institutions use identical or similar AI models—could produce correlated decision-making that amplifies market movements rather than diversifying them. Second, the possibility of AI-driven bank runs or systemic instabilities “may be more amplified in the future” as AI adoption deepens. Third, the interconnections between models and data, combined with lack of transparency, could result in more unpredictable herding behavior—precisely the kind of correlated risk that the Financial Stability Board was created to monitor after the 2008 crisis.
The competitive dynamics further compound the concentration problem. Generative AI requires vast training data, advanced computing power, and substantial financial investment—resources that only the largest technology companies possess. This pushes AI model provision toward oligopoly, with a handful of firms (OpenAI, Google, Anthropic, Meta) serving as critical infrastructure for the entire financial sector. Unlike traditional technology vendors, AI model providers influence the actual decision-making logic of financial institutions—a qualitatively different type of dependency.
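One way to put a number on this kind of dependence is the Herfindahl-Hirschman Index (HHI) used in antitrust analysis: the sum of squared market shares. The shares below are invented for illustration, not measured figures for the AI market.

```python
# Herfindahl-Hirschman Index (HHI): sum of squared market shares, expressed
# in points out of 10,000. The provider shares are hypothetical.
def hhi(shares: list[float]) -> float:
    """HHI in points; shares are fractions that sum to roughly 1."""
    return sum((s * 100) ** 2 for s in shares)

foundation_model_shares = [0.40, 0.30, 0.20, 0.10]  # invented four-provider split
print(hhi(foundation_model_shares))  # 3000.0, above the levels antitrust
                                     # guidelines treat as highly concentrated
```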
Several respondents proposed a “nutritional label” approach for AI disclosures—standardized information about model training, data sources, known limitations, and performance characteristics. Others suggested certification or accreditation programs for AI compliance. However, some warned that licensing requirements could paradoxically exacerbate concentration risks by raising barriers to entry for smaller AI providers.
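As a sketch of what such a label might carry, here is a simple data structure loosely modeled on the model-card idea. The fields are assumptions for illustration; the report does not define a schema.

```python
# Hypothetical "nutritional label" for an AI model, loosely modeled on the
# model-card concept. Field names are assumptions, not a Treasury spec.
from dataclasses import dataclass, field

@dataclass
class ModelLabel:
    name: str
    provider: str
    training_data_summary: str       # provenance, cutoff date, domains
    intended_uses: list[str]
    known_limitations: list[str]     # e.g., hallucination tendencies, blind spots
    evaluation_results: dict[str, float] = field(default_factory=dict)

label = ModelLabel(
    name="example-foundation-model",
    provider="ExampleAI",
    training_data_summary="Public web text through 2024; no customer PII.",
    intended_uses=["document summarization", "internal code assistance"],
    known_limitations=["may hallucinate figures", "English-centric"],
    evaluation_results={"factuality_benchmark": 0.87},
)
print(label.known_limitations)
```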
Illicit Finance and AI-Powered Fraud: Deepfakes and Beyond
The illicit finance section of the Treasury report reads like a preview of the criminal innovation landscape for the coming decade. Criminals are already using AI for document and image manipulation, creating convincing deepfake images for identity fraud. AI tools generate persuasive text for fraudulent communications, social-engineer customer service agents, and produce malicious content including phishing websites and malware at unprecedented scale.
Generative AI could “supercharge” phishing attacks, the report warns, enabling personalized social engineering at scale that was previously impossible. The implications extend from individual consumer fraud to potential threats against financial system integrity. FinCEN issued a specific alert (FIN-2024-ALERT004) on fraud schemes involving deepfake media in November 2024, signaling the urgency of the threat.
The defensive applications of AI against illicit finance are equally significant, however. Robust digital identity solutions, including the FIDO authentication standard and passkeys tied to biometrics, combined with multi-factor authentication and AI-powered risk engines, represent the emerging toolkit against AI-enabled fraud. The cat-and-mouse dynamic between AI-powered attack and defense is likely to define the cybersecurity landscape for financial services over the coming years.
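A hedged sketch of the risk-engine half of that toolkit: combine session signals, including a score from a hypothetical voice-liveness model, into a single risk value and require step-up authentication when it crosses a threshold. All weights and signals here are illustrative assumptions.

```python
# Illustrative risk-engine pattern: score session signals and trigger step-up
# authentication above a threshold. Weights and signals are assumptions.
def session_risk(new_device: bool, geo_mismatch: bool,
                 velocity_per_hour: int, deepfake_voice_score: float) -> float:
    """Weighted risk score in [0, 1]; higher means riskier."""
    score = 0.0
    score += 0.3 if new_device else 0.0
    score += 0.25 if geo_mismatch else 0.0
    score += min(velocity_per_hour / 20, 1.0) * 0.2   # unusual activity volume
    score += deepfake_voice_score * 0.25              # hypothetical liveness model
    return min(score, 1.0)

risk = session_risk(new_device=True, geo_mismatch=True,
                    velocity_per_hour=12, deepfake_voice_score=0.8)
if risk > 0.5:
    print(f"risk={risk:.2f}: step up to passkey / out-of-band verification")
```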
Treasury Recommendations for AI Regulation in Finance
The Treasury outlines five categories of potential next steps, carefully framed as recommendations rather than mandates. First, continued international and domestic collaboration—participation in G7, Financial Stability Board, and OECD working groups on AI, bilateral engagement with other jurisdictions, and coordination with the National Institute of Standards and Technology (NIST).
Second, addressing regulatory gaps and consumer harms through further analysis and stakeholder engagement. This includes evaluating how different supervision levels for banks and nonbanks impact AI usage, exploring whether FCRA, ECOA, and GLBA are sufficient for expanding AI use of consumer data, and clarifying expectations for assessing AI models for discriminatory effects. The report notably reaffirms the Financial Stability Oversight Council’s recommendation that Congress pass legislation ensuring adequate examination and enforcement powers over third-party service providers.
Third, enhancing risk management frameworks through continued coordination among financial regulators. This includes clarifying how the NIST AI Risk Management Framework fits within prudential risk-management expectations—a question that many respondents identified as a significant source of uncertainty.
Fourth, facilitating information sharing through public-private partnerships modeled on the “Cloud Executive Steering Group” launched in May 2023. This includes developing data standards, sharing risk management best practices, exploring ways to develop technology capabilities for smaller financial firms, and monitoring concentration risks associated with AI providers.
Fifth, ensuring financial firm compliance by prioritizing review of AI use cases before deployment and periodically reevaluating compliance as technology evolves. If deficiencies are observed, firms should take immediate action, whether by updating policies, procedures, or AI models, or by switching providers entirely.
What Financial Institutions Must Do Now to Prepare
The Treasury report sends a clear message to every financial institution, regardless of size: the time to establish comprehensive AI governance is now, not after regulatory frameworks are finalized. The 78% adoption rate means that most institutions are already exposed to the risks identified in this report, whether they have formal AI governance programs or not.
For large institutions, the priorities are clear: establish model risk management frameworks that specifically address generative AI’s unique characteristics, audit third-party AI dependencies for concentration risk, and ensure consumer-facing AI applications comply with existing fair lending and consumer protection laws. The explainability challenge requires investment in interpretable AI techniques and audit trails that can satisfy both regulators and courts.
For smaller institutions, the challenge is different but equally urgent. Dependence on a small number of AI vendors creates concentration risk that individual firms cannot mitigate alone. The Treasury’s recommendation for public-private partnerships and industry-wide information sharing is particularly relevant for these institutions. Trade associations and industry groups will need to play an expanded role in developing shared resources, standards, and best practices that level the playing field.
For all institutions, the consumer protection implications demand immediate attention. AI-powered chatbots, credit decisions, and account monitoring systems must be tested for bias, accuracy, and transparency. The report’s finding about improperly closed accounts due to imprecise AI models should serve as a warning: the reputational and legal consequences of AI failures in consumer-facing applications can be severe and immediate.
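Bias testing can start with checks regulators already recognize. The sketch below applies the "four-fifths rule" comparison of approval rates across groups; the counts are invented, and real fair-lending testing spans many more metrics and protected classes.

```python
# Basic fairness check: the "four-fifths rule" compares approval rates across
# groups. Counts below are invented for illustration only.
def adverse_impact_ratio(approved_a: int, total_a: int,
                         approved_b: int, total_b: int) -> float:
    """Ratio of group A's approval rate to group B's (B = highest-rate group)."""
    return (approved_a / total_a) / (approved_b / total_b)

ratio = adverse_impact_ratio(approved_a=310, total_a=1_000,
                             approved_b=450, total_b=1_000)
print(f"adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # below four-fifths: flag for fair-lending review
    print("potential disparate impact; investigate model inputs and design")
```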
The regulatory landscape will continue to evolve rapidly. With 31 states introducing 191 AI-related bills in 2023 alone and 14 becoming law, the patchwork of federal and state requirements is only becoming more complex. Financial institutions that proactively adopt comprehensive AI governance—aligned with both federal expectations and the most stringent state requirements—will be best positioned to navigate this evolving landscape.
Frequently Asked Questions
What did the US Treasury find about AI adoption in financial services?
The US Treasury report found that 78% of financial firms are implementing generative AI for at least one use case. Top applications include risk and compliance (32%), client engagement (26%), and software development (24%). The report also found 86% of firms expect a significant or moderate increase in their AI model inventory.
What are the biggest risks of AI in banking and finance?
The Treasury identified six major risk categories: data privacy and security gaps, AI bias and hallucinations, consumer protection concerns, concentration risks from reliance on a few AI providers, third-party vendor risks, and illicit finance threats from deepfakes and AI-powered fraud schemes.
How does the Treasury recommend regulating AI in financial services?
The Treasury recommends five key actions: international and domestic collaboration on AI standards, addressing regulatory gaps and consumer harms, enhancing risk management frameworks, facilitating public-private information sharing, and ensuring financial firms prioritize compliance review before AI deployment.
What consumer protection risks does AI create in financial services?
Key risks include lack of transparency about data collection and AI model use, potential to steer consumers to predatory products, digital redlining of communities, inaccurate chatbot responses creating liability, and gaps in existing laws like GLBA and FCRA that may leave consumer financial data less protected than other data types.
How much fraud has Treasury detected using AI?
Treasury’s Office of Payment Integrity within the Bureau of the Fiscal Service used machine learning AI for Treasury check fraud detection, resulting in $1 billion in recovery of fraud and improper payments, as announced in October 2024.
What concentration risks does AI create for the financial system?
AI creates concentration risks because numerous financial applications rely on only a handful of foundational models from a few AI providers. An interruption at a single AI provider could create widespread disruptions across the financial system, potentially triggering AI-driven bank runs or systemic instabilities.