GAO AI Financial Services Report | Oversight Guide

📌 Key Takeaways

  • AI spans core financial activities: Financial institutions deploy AI across automated trading, credit decisions, customer service, risk management, fraud detection, and regulatory compliance functions.
  • Eight risk categories identified: The GAO report catalogs fair lending bias, investor protection, privacy, consumer protection, operational/cybersecurity, model risk, compliance risk, and concentration risk as key AI threats.
  • Existing frameworks apply broadly: Federal regulators primarily oversee AI through technology-neutral existing laws and risk-based examinations, with some AI-specific guidance emerging.
  • NCUA oversight gaps flagged: The GAO recommends NCUA update its limited model risk management guidance and highlights the agency’s lack of third-party oversight authority.
  • Regulators adopt AI themselves: Federal financial regulators are deploying AI internally to enhance supervisory activities, detect patterns in financial data, and improve examination efficiency.

GAO AI Financial Services Report: Scope and Significance

The U.S. Government Accountability Office’s report GAO-25-107197, published in May 2025, provides a comprehensive examination of artificial intelligence use and oversight in the financial services sector. This landmark GAO AI financial services assessment addresses three critical dimensions: how financial institutions are deploying AI, how federal regulators oversee these AI applications, and how regulators themselves are adopting AI tools to enhance their supervisory capabilities. The report arrives at a pivotal moment when AI adoption in finance is accelerating rapidly following the emergence of generative AI applications.

The GAO conducted extensive research including reviews of documentation from seven federal financial regulators — the Federal Reserve, OCC, FDIC, NCUA, SEC, CFTC, and CFPB — alongside interviews with industry groups, consumer advocacy organizations, research firms, depository institutions, and technology providers. The report’s findings carry particular weight given the GAO’s role as Congress’s investigative arm, providing authoritative analysis that directly informs legislative and regulatory decision-making on AI governance in financial services.

As financial institutions increasingly integrate AI into core operations, from automated trading systems to credit underwriting algorithms, the regulatory framework governing these technologies faces unprecedented challenges. The report’s identification of gaps in oversight, particularly at the National Credit Union Administration, signals areas where regulatory modernization is urgently needed. For a broader perspective on AI’s regulatory landscape, see our analysis of regulating AI in financial services.

How Financial Institutions Use Artificial Intelligence Today

Financial institutions have deployed artificial intelligence across a remarkably wide range of activities, creating both efficiency gains and new risk vectors that demand careful oversight. According to the GAO’s research, AI applications in finance span automated trading where algorithms execute orders based on market analysis, credit decisions including underwriting and loan pricing, customer service through AI-powered chatbots and virtual assistants, investment management via robo-advisers, and sophisticated risk management for credit and liquidity assessment.

Beyond front-office applications, AI plays an increasingly critical role in countering illicit finance and detecting fraud. Machine learning algorithms analyze transaction patterns to identify potential money laundering, insider trading, and synthetic identity fraud that human analysts may miss. The GAO notes that AI can reduce false positives in anti-money laundering alerts, which constitute the vast majority of flagged transactions, allowing compliance teams to focus on genuinely suspicious activities.
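The false-positive reduction described above hinges on ranking alerts by risk rather than treating every rules-based flag equally. A minimal sketch of that idea follows, in pure Python; the z-score feature, thresholds, and alert fields are all hypothetical simplifications, not the methods of any actual AML system:

```python
from statistics import mean, stdev

def alert_score(amount: float, history: list[float]) -> float:
    """Score a transaction by how far it deviates from the customer's
    own transaction history (simple z-score; illustrative only)."""
    if len(history) < 2:
        return 0.0
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return abs(amount - mu) / sigma

def prioritize(alerts: list[dict], top_n: int = 2) -> list[dict]:
    """Rank alerts by score so compliance analysts review the riskiest
    first, instead of wading through every flagged transaction."""
    ranked = sorted(alerts, key=lambda a: a["score"], reverse=True)
    return ranked[:top_n]

history = [100.0, 120.0, 95.0, 110.0]
alerts = [
    {"id": "A1", "score": alert_score(105.0, history)},   # routine amount
    {"id": "A2", "score": alert_score(9500.0, history)},  # large outlier
    {"id": "A3", "score": alert_score(140.0, history)},
]
top = prioritize(alerts)
print([a["id"] for a in top])  # → ['A2', 'A3']
```

Real deployments replace the toy z-score with learned models over many features, but the prioritization pattern — score, rank, review top-N — is what lets analysts skip the bulk of low-risk flags.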

Two categories of AI are particularly relevant in financial services. Traditional machine learning programs that automatically improve performance through experience have been used for decades in quantitative modeling, credit scoring, and algorithmic trading. More recently, generative AI — capable of creating text, images, audio, and video content — has introduced new capabilities in document analysis, customer communication, and research synthesis. The distinction between these AI categories matters significantly for regulatory oversight, as generative AI introduces novel risks including hallucinations and expanded cybersecurity attack surfaces.

Benefits of AI in Financial Services for Consumers and Markets

The GAO report identifies several substantial benefits that AI delivers to consumers, investors, financial institutions, and broader financial markets. Lower costs represent one of the most immediate consumer benefits, as AI-powered robo-advisers provide investment advice at lower fees with smaller account minimums compared to traditional advisory services. Similarly, AI automation of credit underwriting enables credit unions and smaller lenders to deliver financial products more efficiently and affordably to underserved populations.

Enhanced efficiency and accuracy constitute another major advantage, with AI processing vast datasets faster and more consistently than manual methods. Financial institutions report improved customer experiences through AI chatbots that understand and respond to questions in natural language, with some credit unions using AI to personalize interactions by recommending frequently used services. Increased security through better detection of cyber threats and illicit finance represents a particularly valuable benefit, as AI identifies synthetic identity fraud cases that human analysts cannot easily detect.

Capital markets also benefit from AI-optimized trade execution, which can reduce price volatility from large trades by dynamically adjusting order size and timing based on real-time market conditions. The Organisation for Economic Co-operation and Development has highlighted these market stability benefits. Additionally, AI enhances compliance and risk management capabilities, helping financial institutions better manage regulatory requirements while the International Monetary Fund notes the potential for improved financial system resilience through AI-driven risk assessment.
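The "dynamically adjusting order size and timing" idea above can be sketched as a simple participation-rate slicer: a large parent order is broken into child orders capped at a fixed fraction of each interval's expected market volume. This is an illustrative sketch only; the 10% participation cap and volume forecasts are hypothetical, and production execution algorithms are far more sophisticated:

```python
def slice_order(total_qty: int, expected_volumes: list[int],
                participation: float = 0.1) -> list[int]:
    """Split a large parent order into child orders, each capped at a
    fixed fraction of that interval's expected market volume, so the
    order never dominates trading in any single interval."""
    slices = []
    remaining = total_qty
    for vol in expected_volumes:
        child = min(remaining, int(vol * participation))
        slices.append(child)
        remaining -= child
        if remaining == 0:
            break
    if remaining > 0:          # leftover goes into a final cleanup slice
        slices.append(remaining)
    return slices

# A 50,000-share order spread over intervals with forecast volumes
print(slice_order(50_000, [120_000, 150_000, 100_000, 130_000]))
# → [12000, 15000, 10000, 13000]
```

Capping each child order at a small share of interval volume is one common way to limit the price impact the GAO and OECD discussion refers to.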


AI Risks in Financial Services: Fair Lending and Bias

The GAO report provides a detailed taxonomy of AI risks in financial services, with fair lending bias emerging as one of the most consequential concerns. AI models can perpetuate or amplify existing biases in credit decisions, leading to credit denials or higher-priced lending for borrowers in protected classes. Researchers have testified that some AI models can infer applicants’ race or gender from application data or create complex variable interactions that result in disproportionately negative effects on specific demographic groups.

The Financial Stability Oversight Council has warned that as AI models grow more complex, identifying and correcting biases becomes increasingly difficult. This complexity challenge is compounded by the “black box” nature of many advanced AI systems, where the reasoning behind specific credit decisions may be opaque even to the institutions deploying them. For lenders, this creates a tension between leveraging AI’s superior predictive capabilities and maintaining compliance with fair lending laws including the Equal Credit Opportunity Act and Fair Housing Act.
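One widely used heuristic for the bias testing discussed above is the disparate impact ratio under the "four-fifths rule" from fair lending practice. The sketch below is not from the GAO report; it is a minimal illustration with hypothetical approval data, and real fair lending analysis involves far more rigorous statistical testing:

```python
def approval_rate(decisions: list[int]) -> float:
    """Share of applicants approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected: list[int], reference: list[int]) -> float:
    """Ratio of approval rates between a protected-class group and a
    reference group. Values below 0.8 are commonly treated as a red
    flag under the 'four-fifths rule' heuristic."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical model decisions for two applicant groups
protected_group = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 30% approved
reference_group = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% approved

ratio = disparate_impact_ratio(protected_group, reference_group)
print(f"{ratio:.2f}")  # → 0.43, well below the 0.8 threshold
```

A check like this only surfaces outcome disparities; it cannot by itself establish whether a model infers protected characteristics through proxy variables, which is the deeper opacity problem the report describes.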

Investor protection represents another critical risk area. AI-powered investment platforms could exploit behavioral biases or generate conflicting advice that prioritizes institutional profits over client interests. The opacity of AI decision-making can obscure such conflicts, making it difficult for investors and regulators alike to detect when automated advice fails to meet fiduciary standards. Consumer advocates note that the line between personalized service and manipulative design can become dangerously thin in AI-driven financial advisory contexts, a concern also explored in our guide to AI transforming finance.

Privacy, Cybersecurity and Operational AI Risks

Privacy risks from AI in financial services extend well beyond traditional data protection concerns. The GAO report highlights that machine learning and generative AI models may leak sensitive data directly or enable inference attacks that deduce individual identities from anonymized datasets. Some financial institutions have responded by restricting employee access to publicly available generative AI applications, recognizing that data entered into these systems could be exposed or used for model training purposes.

Consumer protection risks from AI-powered customer-facing applications present unique challenges. The prudential regulators and CFPB have noted that AI can create or heighten risks of unfair, deceptive, or abusive acts or practices. Generative AI hallucinations — outputs that are false but convincingly presented — are especially problematic in consumer-facing applications where inaccurate financial information could lead to harmful decisions. Representatives from large banks confirmed that hallucination risk is a primary reason financial institutions avoid using generative AI for high-accuracy activities such as credit underwriting.

Operational and cybersecurity risks associated with AI could lead to technical breakdowns disrupting financial institution operations. The use of AI introduces failures related to internal processes, controls, and information technology, as well as risks from third-party dependencies. Novel adversarial attacks could manipulate AI systems to extract sensitive information, evade detection mechanisms, or make incorrect decisions. The National Institute of Standards and Technology has established trustworthy AI characteristics including security, resilience, and robustness that organizations should address to mitigate these risks.

Model Risk and Concentration Risk in AI-Driven Finance

Model risk represents a foundational concern in the GAO’s AI financial services analysis. AI models may underperform and result in financial losses or reputational harm due to data quality issues including incomplete, erroneous, or biased training data. The dynamic nature of AI models that continuously learn from live data introduces additional complexity, as shifts in underlying data characteristics can cause models to degrade in performance without warning. The Financial Stability Oversight Council notes that expert analysis may be needed to evaluate generative AI output accuracy, a resource-intensive requirement that challenges smaller financial institutions.
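The silent performance degradation described above is commonly monitored with drift metrics such as the Population Stability Index (PSI), which compares a model input's distribution at deployment against its distribution today. The sketch below uses hypothetical bin fractions and the conventional (but rule-of-thumb) PSI thresholds; it is an illustration of the technique, not a method prescribed by the GAO:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions
    (fractions summing to 1). Common heuristic: PSI < 0.1 is stable,
    0.1-0.25 warrants investigation, > 0.25 signals material drift."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)   # guard against empty bins
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

# Score distribution at model deployment vs. today (hypothetical bins)
baseline = [0.10, 0.20, 0.40, 0.20, 0.10]
current  = [0.05, 0.10, 0.30, 0.30, 0.25]

print(f"PSI = {psi(baseline, current):.3f}")  # → PSI = 0.311
```

A PSI above 0.25, as here, would typically trigger model revalidation — exactly the kind of early warning needed when models continuously learn from shifting live data.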

Compliance risk emerges when AI systems produce results that inadvertently violate laws or regulations. The GAO cites research showing that an AI model identified market manipulation as an optimal investment strategy without being explicitly programmed to do so, illustrating how AI can autonomously develop behaviors that breach legal boundaries. In algorithmic trading, AI systems that independently reach similar strategies could amplify market movements, potentially disrupting financial stability according to the CFTC.

Concentration risk from reliance on a small number of third-party AI service providers — including data providers, cloud platforms, and technology firms — could increase the financial system’s vulnerability to single points of failure. While the risk of concentration is not unique to AI, the technology’s requirements for significant computational power and massive datasets amplify this concern. The Financial Stability Board has noted that high concentration among AI service providers warrants careful monitoring, though some market participants argue that model and dataset diversity will provide sufficient natural mitigation.


Federal Regulatory Oversight Frameworks for AI in Finance

Federal financial regulators primarily oversee AI in financial services through existing laws, regulations, guidance, and risk-based examination processes. Officials from most regulators told the GAO they believe existing statutory authorities are generally sufficient to supervise regulated entities’ AI use at this time. The technology-neutral approach means that lending laws, consumer protection requirements, and safety and soundness standards apply equally whether decisions are made with traditional methods or AI models.

The oversight framework involves seven federal regulators with distinct responsibilities. The prudential regulators — FDIC, Federal Reserve, OCC, and NCUA — incorporate AI review into broader examinations of safety and soundness, information technology, and regulatory compliance. The extent of examination depends on each institution’s AI use and risk management practices. The SEC and CFTC oversee AI use in securities and derivatives markets respectively, while CFPB monitors AI applications affecting consumer financial protection.

Model risk management guidance serves as a key oversight tool, particularly the jointly issued SR 11-7/OCC 2011-12 guidance that establishes expectations for how financial institutions should validate, monitor, and govern their models. However, industry representatives noted opportunities to clarify AI-related guidance, particularly regarding how existing model risk frameworks apply to newer generative AI applications that behave differently from traditional statistical models. Our analysis of AI governance frameworks in banking provides additional context on evolving regulatory approaches.

NCUA AI Oversight Gaps and GAO Recommendations

The GAO’s most significant finding centers on the National Credit Union Administration’s limited AI oversight capabilities. NCUA’s model risk management guidance has not been updated to address the complexities of modern AI systems, leaving credit unions without the detailed supervisory expectations that govern AI use at banks regulated by the Federal Reserve, OCC, and FDIC. This gap is particularly concerning as credit unions increasingly adopt AI-powered services from third-party technology providers.

Compounding this guidance gap, NCUA lacks third-party oversight authority that other prudential regulators possess. As credit unions rely more heavily on external AI vendors for core capabilities — from lending algorithms to fraud detection systems — the inability to directly examine these third-party providers creates a blind spot in the regulatory framework. NCUA officials themselves cited this authority gap as a reason existing authorities are insufficient for comprehensive AI supervision.

The GAO recommends that NCUA update its model risk management guidance to incorporate AI-specific considerations, aligning its supervisory framework with the standards maintained by other prudential regulators. This recommendation reflects the GAO’s assessment against NCUA’s own 2022-2026 strategic plan objectives, which include ensuring credit union safety and soundness in an evolving technology landscape. Implementation would help ensure that the approximately 4,600 federally insured credit unions serving over 130 million members have appropriate oversight of their AI deployments.

How Regulators Use AI to Enhance Supervisory Activities

Beyond overseeing industry AI use, federal financial regulators are themselves adopting artificial intelligence to enhance their supervisory activities. Regulators are leveraging AI tools to analyze examination data more efficiently, detect patterns in financial reporting that may indicate emerging risks, and improve the overall effectiveness of their oversight processes. This dual role — both supervising and deploying AI — creates unique governance challenges for regulatory agencies.

Different regulators are taking varying approaches to expanding their AI capabilities. Some agencies are developing internal AI tools and building dedicated data science teams, while others are leveraging commercial AI solutions adapted for regulatory use. The approaches reflect each agency’s specific mission, resources, and risk tolerance, with larger agencies like the Federal Reserve and SEC generally having more advanced AI programs than smaller regulators. AI applications in supervision include natural language processing for analyzing regulatory filings, machine learning for identifying outlier financial institutions, and predictive analytics for prioritizing examination resources.

The expansion of regulatory AI use raises important governance questions about accuracy, transparency, and fairness in automated supervisory decisions. Regulators must apply the same trustworthy AI principles to their own systems that they expect from the institutions they oversee, including validation, monitoring, and bias testing of AI tools used in examination and enforcement activities. The Government Accountability Office continues to monitor these developments to ensure that regulatory AI adoption enhances rather than compromises the quality of financial supervision.

Implications for AI Governance in Financial Services

The GAO’s comprehensive assessment of AI in financial services carries significant implications for the future of AI governance across the sector. The report’s findings suggest that while existing regulatory frameworks provide a solid foundation, the rapid pace of AI innovation — particularly in generative AI — will require ongoing adaptation of supervisory approaches. Financial institutions, regulators, and policymakers must collaborate to ensure that governance frameworks evolve alongside technological capabilities without stifling beneficial innovation.

For financial institutions, the report reinforces the importance of robust internal AI governance programs encompassing model risk management, data quality controls, bias testing, and transparency mechanisms. Institutions that proactively implement comprehensive AI governance will be better positioned to navigate regulatory expectations while maintaining competitive advantages from AI deployment. The emphasis on fair lending, consumer protection, and operational resilience provides clear priorities for enterprise AI risk management programs.

Looking ahead, several developments will shape the trajectory of AI oversight in financial services. Congressional attention to AI regulation continues to intensify, with the GAO report informing legislative deliberations on potential AI-specific requirements for financial institutions. International coordination through bodies like the Financial Stability Board and Bank for International Settlements will influence domestic regulatory approaches. The challenge for all stakeholders is balancing the substantial benefits AI offers to financial services against the real and evolving risks that demand vigilant, adaptive oversight frameworks that protect consumers and maintain financial stability.


Frequently Asked Questions

How are financial institutions using artificial intelligence?

According to the GAO report, financial institutions use AI across multiple activities including automated trading, credit decisions, customer service via chatbots, investment advisory through robo-advisers, risk management for credit and liquidity risk, countering illicit finance and fraud detection, and regulatory compliance. AI applications range from traditional machine learning models to newer generative AI capabilities.

What are the main risks of AI in financial services?

The GAO identifies eight key AI risks in financial services: fair lending risk from biased credit decisions, investor protection risks from conflicted AI advice, privacy risks from data leakage, consumer protection risks from chatbot hallucinations, operational and cybersecurity risks, model risk from data quality issues, compliance risk from inadvertent regulatory violations, and concentration risk from reliance on a small number of third-party AI providers.

How do federal regulators oversee AI use in financial services?

Federal financial regulators primarily oversee AI through existing laws, regulations, guidance, and risk-based examinations. Regulators such as the Federal Reserve, OCC, FDIC, SEC, CFTC, and CFPB apply technology-neutral frameworks that cover AI activities. Some have issued AI-specific guidance, and prudential regulators rely on model risk management frameworks originally designed for traditional quantitative models.

What is the GAO recommendation regarding NCUA AI oversight?

The GAO recommends that NCUA update its key AI oversight tools, specifically its model risk management guidance, which is currently limited compared to other prudential regulators. NCUA also lacks third-party oversight authority, which the agency cites as a significant gap given credit unions’ increasing reliance on third-party AI service providers.

Are federal regulators using AI in their own supervisory activities?

Yes, federal financial regulators are adopting AI to enhance their supervisory activities. Regulators are using AI tools for tasks such as analyzing examination data, detecting patterns in financial reporting, and improving efficiency of oversight processes. Different agencies are taking varying approaches to expanding AI use, with some developing internal AI capabilities and others leveraging commercial solutions.
