FSB Monitoring AI Adoption in the Financial Sector: Vulnerabilities, Indicators, and Risk Assessment
Table of Contents
- Introduction: Why the FSB Is Monitoring AI in Finance
- Key Vulnerabilities Identified by the FSB
- How Financial Authorities Currently Monitor AI Adoption
- Supervisory Reporting and Survey Approaches
- Third-Party Dependencies and Concentration Risk
- Market Correlations and Systemic AI Risk
- Cyber Risks and AI Model Governance Challenges
- Monitoring Indicators and Data Collection Strategies
- Recommendations for Regulators and Financial Institutions
- What This Means for the Future of AI in Finance
📌 Key Takeaways
- Four critical vulnerabilities: The FSB identifies third-party dependencies, market correlations, cyber risks, and model governance as the primary AI-related threats to financial stability.
- Monitoring is still early-stage: Despite progress, most financial authorities are only beginning to develop systematic approaches to tracking AI adoption and its risks across institutions.
- Concentration risk is paramount: Vertical integration by a small number of global technology providers creates systemic single points of failure in the AI supply chain serving finance.
- 28 survey responses: The FSB gathered 28 responses from authorities in 19 jurisdictions and one international organisation, revealing significant gaps in standardised definitions and comparable data.
- Actionable indicator framework: The report provides both direct and proxy indicators regulators can use immediately, from AI patent tracking to supervisory engagement protocols.
Introduction: Why the FSB Is Monitoring AI in Finance
Artificial intelligence is rapidly transforming the global financial sector, bringing unprecedented opportunities for efficiency, compliance, analytics, and product personalisation. Yet as adoption accelerates, so do the risks. The Financial Stability Board (FSB), the international body that monitors and makes recommendations about the global financial system, published a landmark report in October 2025 titled Monitoring Adoption of Artificial Intelligence and Related Vulnerabilities in the Financial Sector. This report builds on the FSB’s 2024 assessment of AI’s financial stability implications and represents a significant shift from identifying risks to actively monitoring them.
The timing is critical. Since the 2024 report, the AI landscape has evolved dramatically. High-performance, lower-cost open-weight models have emerged. Multi-step reasoning models have entered the market. Competition in hardware has intensified. Most significantly, global technology providers have expanded vertical integration — controlling models, infrastructure, and cloud services simultaneously. These developments, while driving innovation, create new concentration risks that traditional financial regulatory frameworks were never designed to address.
Requested by the South African G20 Presidency, this 2025 report draws on a member survey with 28 responses from authorities in 19 jurisdictions and one international organisation, supplemented by interviews, public data, and stakeholder outreach. For financial institutions, regulators, and anyone tracking the intersection of AI governance frameworks, this report is essential reading.
Key Vulnerabilities Identified by the FSB
The FSB’s framework organises AI-related financial stability risks into four distinct vulnerability categories, each requiring different monitoring approaches and regulatory responses. Understanding these categories is the foundation for any meaningful AI risk assessment in the financial sector.
Third-Party Dependencies and Service Provider Concentration
Financial institutions increasingly rely on external AI providers for critical functions — from cloud-based model hosting to pre-trained foundation models. When multiple institutions depend on the same provider, a single disruption can cascade across the sector. The FSB specifically highlights the risk of “supply chain layering,” where providers stack services (compute, storage, model access) from a limited number of upstream providers, amplifying concentration at invisible chokepoints.
Market Correlations
When financial institutions use similar AI models trained on overlapping datasets, their investment decisions, credit assessments, and trading strategies can become correlated in ways that amplify market volatility. The FSB notes this is particularly challenging to monitor because attribution is complex — isolating AI-driven correlations from other market dynamics requires sophisticated analytical approaches that most regulators have not yet developed.
Cyber Risks
AI introduces novel attack vectors into financial systems. Adversarial inputs can manipulate AI decision-making, model poisoning can compromise training data integrity, and the growing API surface area of AI-integrated systems expands the cyber attack perimeter. Simultaneously, malicious actors are leveraging AI to create more sophisticated phishing, deepfakes, and social engineering attacks targeting financial institutions. For organisations tracking cybersecurity trends in finance, the WEF Global Cybersecurity Outlook provides complementary analysis.
Model Risk, Data Quality, and Governance
The opacity of advanced AI models — particularly large language models and deep learning systems — makes traditional model validation approaches insufficient. Explainability gaps, data quality issues, and the challenge of governing rapidly evolving AI systems create risks that compound as institutions deploy AI in increasingly critical functions. Many authorities noted in the survey that distinguishing between technology-neutral and AI-specific governance requirements remains an unresolved tension in regulatory approach.
How Financial Authorities Currently Monitor AI Adoption
The FSB’s survey reveals that while most financial authorities are collecting some data on AI adoption, their approaches vary significantly in sophistication, scope, and frequency. This section examines the current monitoring landscape across jurisdictions, highlighting both progress and gaps.
The majority of survey respondents — representing prudential supervisors, market conduct regulators, and central banks — reported collecting data on AI adoption through at least one channel. The most common approach is targeted surveys of financial institutions, followed by analysis of publicly available data. Several authorities also rely on supervisory reporting and private data provider subscriptions. However, definitions of what constitutes “AI” vary widely across jurisdictions. Some use the OECD AI definition, others reference the EU AI Act, and some operate with jurisdiction-specific definitions or no AI-specific definition at all. This definitional inconsistency is a fundamental barrier to cross-border comparability and coordinated oversight.
Importantly, monitoring specific AI-related vulnerabilities is substantially more challenging than tracking adoption metrics. Many respondents acknowledged that AI-specific vulnerability data is often embedded within broader supervisory initiatives — such as third-party risk management or operational incident monitoring — making it difficult to isolate AI-related signals from background noise. Few jurisdictions have dedicated data collection specifically targeting AI-driven market correlations, which the FSB identifies as one of the most significant systemic risks.
Supervisory Reporting and Survey Approaches
Financial regulators employ a spectrum of data collection mechanisms to track AI adoption, each with distinct strengths and limitations. The FSB’s survey identified four primary approaches: supervisory reporting, surveys, industry outreach, and publicly available data analysis.
Supervisory Reporting
Over one-third of survey respondents with supervisory authority reported having AI-related supervisory reporting requirements in place. This approach provides the most granular and reliable data — institutions must report specific AI use cases, governance frameworks, and incident data to their regulators. However, it is also the most resource-intensive for both firms and authorities, and tends to be less common than lighter-touch survey approaches. Several authorities without formal reporting requirements maintain ongoing supervisory engagement with the largest institutions as a pragmatic alternative.
Surveys of Financial Institutions
Surveys are the most widespread monitoring tool. A substantial majority of respondents have conducted AI usage surveys, though most are voluntary. Notable examples include the Bank of England and FCA’s joint AI survey in the UK and the Financial Services Agency of Japan’s (JFSA) comprehensive sector survey. IOSCO has also conducted an AI survey focused on capital markets. While surveys can reach a broad range of institutions efficiently, the FSB flags several challenges: voluntary surveys suffer from selection bias, aggregate results may mask important variation, and follow-up discussions are often required to contextualise findings.
Industry Outreach
Roundtables, workshops, innovation hubs, sandboxes, and bilateral engagement sessions provide richer qualitative insights than structured surveys alone. The report highlights successful examples including the US Treasury's 2021 Request for Information on AI in financial services, the Bank of England/FCA Artificial Intelligence Public-Private Forum (AIPPF), and the IMF's bilateral outreach for its Global Financial Stability Report. These approaches offer deeper understanding, but they are costly, and firms may be reluctant to share competitive details in group settings.
Third-Party Dependencies and Concentration Risk
Of the four vulnerability categories, third-party dependencies and service provider concentration receive the most detailed treatment in the FSB report — and for good reason. The rapid evolution of the AI supply chain has created concentration risks that extend well beyond what traditional vendor risk management frameworks were designed to capture.
The report includes a dedicated case study on generative AI (GenAI) that illustrates how supply chain layering creates hidden dependencies. When a financial institution uses a GenAI application built on a foundation model hosted on a specific cloud provider’s infrastructure, it may appear to have a relationship with one vendor. In reality, the chain involves the application provider, the model developer, the compute provider, and potentially multiple infrastructure layers — each potentially controlled by the same small group of global technology companies.
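The layering described above is essentially a dependency graph, and the hidden concentration appears when each chain is traced to its leaves. The sketch below illustrates the idea with a hypothetical supply chain; the service names and dependency map are illustrative assumptions, not data from the FSB report.

```python
# Sketch: resolving a layered AI supply chain to its ultimate upstream
# providers. SUPPLY_CHAIN is hypothetical illustrative data.

# Each service lists the services it depends on; leaves are ultimate providers.
SUPPLY_CHAIN = {
    "genai-app": ["foundation-model", "cloud-host"],
    "foundation-model": ["compute-provider"],
    "cloud-host": ["compute-provider"],
    "compute-provider": [],
}

def ultimate_providers(service, chain):
    """Return the set of leaf providers a service transitively depends on."""
    deps = chain.get(service, [])
    if not deps:
        return {service}
    leaves = set()
    for dep in deps:
        leaves |= ultimate_providers(dep, chain)
    return leaves

# The institution contracts with one vendor, yet every path in the chain
# terminates at the same upstream provider.
print(ultimate_providers("genai-app", SUPPLY_CHAIN))
```

A register that records only the first-tier vendor would show no concentration here; tracing the chain reveals a single point of failure.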
The FSB identifies vertical integration as a particularly concerning trend. When a single technology provider controls the model, the training infrastructure, the cloud hosting, and the API access layer, financial institutions face both operational and strategic lock-in. Switching costs become prohibitive, alternative options narrow, and the systemic risk from a single provider’s failure or policy change multiplies across all dependent institutions simultaneously.
To monitor this vulnerability, the FSB suggests several specific indicators: the share of financial institutions’ AI applications provided by third parties, registers of critical AI services and providers based on firm submissions, and the number of systemically important financial institutions for which third-party AI services support critical operations. The report also recommends tracking relative cost and performance of widely used AI services as a proxy for substitutability — lower substitutability implies higher concentration risk. For deeper insights into how AI is reshaping enterprise technology dependencies, the Accenture Technology Vision 2025 offers a corporate perspective on AI autonomy.
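Once a register of critical AI services exists, provider concentration can be summarised with a standard market-concentration statistic. The sketch below uses the Herfindahl-Hirschman Index (HHI) over provider shares; the HHI choice and the sample register are illustrative assumptions, not measures the FSB report prescribes.

```python
# Sketch: quantifying provider concentration from a firm-submitted register.
# The register contents and the use of HHI are illustrative assumptions.

# Hypothetical register: institution -> its critical AI service provider.
register = {
    "bank_a": "provider_x", "bank_b": "provider_x",
    "bank_c": "provider_x", "bank_d": "provider_y",
    "insurer_e": "provider_y", "broker_f": "provider_z",
}

def provider_shares(register):
    """Share of institutions served by each provider."""
    counts = {}
    for provider in register.values():
        counts[provider] = counts.get(provider, 0) + 1
    total = len(register)
    return {p: n / total for p, n in counts.items()}

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared shares (0-1 scale)."""
    return sum(s ** 2 for s in shares.values())

# Higher values signal heavier concentration; 1.0 means a single provider.
print(round(hhi(provider_shares(register)), 3))
```

Tracking this statistic over time, alongside the substitutability proxies the report suggests, would show whether dependence on a few providers is deepening.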
Market Correlations and Systemic AI Risk
Perhaps the most intellectually challenging vulnerability the FSB examines is the potential for AI to amplify market correlations. When financial institutions adopt similar AI models — particularly widely used pre-trained foundation models — their automated decision-making may converge, creating herding behaviour that amplifies market movements in both directions.
The report is notably candid about the difficulty of monitoring this risk. Direct indicators of AI-driven market correlations are largely unavailable. Attribution is complex: separating AI-driven correlation from correlation caused by common information, shared regulatory constraints, or parallel human analysis requires analytical sophistication that most regulators have not yet developed. The FSB acknowledges that “limited transparency in AI adoption, causality challenges, and model mis-specification risk under shifting market regimes” make this a particularly stubborn monitoring problem.
As proxy indicators, the FSB suggests tracking the number of financial institutions using each widely used pre-trained model, along with model features and training data sources. If many institutions use the same model with similar fine-tuning, their outputs may be more correlated than if they used diverse approaches. Analytical measures the report proposes include regression analysis, event studies, machine learning models, and network analysis to detect associations between AI adoption patterns and asset price volatility or correlation spikes.
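One simple way to operationalise the model-overlap proxy is to measure how similar institutions' model portfolios are to one another. The sketch below uses pairwise Jaccard similarity; the measure and the usage data are illustrative assumptions, not methods the FSB report specifies.

```python
# Sketch: a proxy for model-driven correlation risk based on overlap in
# institutions' pre-trained model portfolios. Data and metric are
# illustrative assumptions.
from itertools import combinations

# Hypothetical data: which widely used pre-trained models each firm uses.
models_in_use = {
    "firm_a": {"model_1", "model_2"},
    "firm_b": {"model_1"},
    "firm_c": {"model_1", "model_3"},
}

def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def mean_pairwise_overlap(usage):
    """Average Jaccard similarity across all pairs of institutions."""
    pairs = list(combinations(usage.values(), 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# High average overlap suggests firms' model outputs may co-move.
print(round(mean_pairwise_overlap(models_in_use), 3))
```

A rising overlap score would not prove AI-driven correlation, given the attribution problems the report describes, but it would flag where correlated behaviour is structurally more likely.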
This concern is not theoretical. In algorithmic trading, concentration on similar strategies has already produced flash crashes and amplified sell-offs. As AI extends into credit assessment, insurance underwriting, and portfolio management, the potential for correlated AI decisions to synchronise across the financial system grows proportionally. The Basel Committee’s supervisory framework for AI provides additional context on how prudential regulators are approaching model risk in this space.
Cyber Risks and AI Model Governance Challenges
The FSB’s treatment of cyber risks and model governance reflects a sector grappling with fundamentally new challenges that existing regulatory frameworks were not built to address.
AI-Amplified Cyber Threats
Many survey respondents collect general cyber incident data that may include AI-related incidents, but few have implemented monitoring specifically targeting AI as a vector or target of cyberattacks. Several authorities include AI-specific questions in their broader cybersecurity surveys, and some monitor public databases of AI cyber incidents, including the OECD AI Incidents Monitor. However, the rapid evolution of AI-powered attack techniques — from sophisticated phishing using large language models to deepfake-enabled social engineering — means that monitoring approaches must constantly adapt. For comprehensive threat intelligence, the Mandiant M-Trends 2025 report provides detailed analysis of current threat landscapes.
Model Risk and Governance
The governance challenge is multifaceted. Many authorities include survey questions on AI governance, model risk management, and explainability. Several collect qualitative model risk data through supervisory discussions or thematic reviews. But only a limited number have issued AI-specific model governance guidance. The tension between technology-neutral and AI-specific regulatory approaches remains unresolved across most jurisdictions.
A particular challenge flagged by respondents is the classification of modified third-party models. When a financial institution fine-tunes a pre-trained model, should it be classified as an internally developed model or a third-party model? Different institutions — and different jurisdictions — answer this question differently, creating inconsistencies in both risk reporting and governance frameworks. This seemingly technical question has significant implications for how concentration risk, model validation responsibilities, and regulatory oversight are allocated.
Monitoring Indicators and Data Collection Strategies
One of the most practically valuable contributions of the FSB report is its structured framework of monitoring indicators. The report presents both direct indicators — requiring collection from financial institutions — and proxy indicators that can be derived from publicly available sources. Together, they provide regulators with an actionable toolkit for building AI monitoring capabilities.
Direct Indicators for AI Adoption
The FSB recommends that authorities develop inventories of AI use cases in the financial sector, broken down by financial activity (trading, lending, insurance, payments), types of AI (generative, agentic, traditional ML), and levels of materiality (core business, critical operations, low-risk internal processes). Additionally, tracking the share of institutions using AI by use case, model type, and firm size provides a quantitative foundation for adoption monitoring.
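An inventory with these breakdowns can be a very small data structure. The sketch below shows one possible shape and computes the adoption share by activity; the field names, category labels, and sample records are illustrative assumptions.

```python
# Sketch: a minimal AI use-case inventory with the breakdowns the FSB
# recommends (activity, AI type, materiality). All records and field
# names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    institution: str
    activity: str      # e.g. trading, lending, insurance, payments
    ai_type: str       # e.g. generative, agentic, traditional_ml
    materiality: str   # e.g. core, critical, low_risk

inventory = [
    UseCase("bank_a", "lending", "traditional_ml", "core"),
    UseCase("bank_a", "trading", "generative", "critical"),
    UseCase("bank_b", "lending", "generative", "low_risk"),
    UseCase("insurer_c", "insurance", "traditional_ml", "core"),
]

def adoption_share_by_activity(inventory, institutions):
    """Share of institutions with at least one AI use case, per activity."""
    users = {}
    for uc in inventory:
        users.setdefault(uc.activity, set()).add(uc.institution)
    return {act: len(insts) / institutions for act, insts in users.items()}

print(adoption_share_by_activity(inventory, institutions=4))
```

The same records can be re-aggregated by AI type or materiality, which is why a structured inventory is more useful than free-text survey answers.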
Proxy Indicators
For authorities with limited direct collection capabilities, the report identifies several proxy sources. AI-related patent applications, available through databases like WIPO PATENTSCOPE, provide insight into innovation trajectories — the FSB notes that US G-SIBs filed over 1,400 AI-related patent applications over the past decade. AI-related job postings from platforms like Indeed Hiring Lab and Lightcast signal workforce investment. The US Census Bureau's Business Trends and Outlook Survey (BTOS) provides biweekly data on AI adoption, revealing that securities and investment firms are more likely to use AI than lenders or insurers, though lenders' AI usage has increased sharply in recent periods. Textual analysis of public filings, earnings transcripts, and research papers offers another window into institutional AI strategy.
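The textual-analysis proxy can start as something as simple as counting AI-related terms in a filing. The sketch below is a deliberately crude version; the keyword list and sample text are illustrative assumptions, and a real pipeline would use richer NLP than raw term counts.

```python
# Sketch: a crude textual-analysis proxy for AI adoption signals in
# public filings. Keyword list and sample text are illustrative
# assumptions, not a method from the FSB report.
import re

AI_TERMS = ["artificial intelligence", "machine learning",
            "generative ai", "large language model"]

def ai_mention_count(text):
    """Count case-insensitive occurrences of AI-related terms."""
    lowered = text.lower()
    return sum(len(re.findall(re.escape(term), lowered))
               for term in AI_TERMS)

filing = ("We expanded our use of machine learning in credit scoring and "
          "piloted a generative AI assistant for client reporting.")
print(ai_mention_count(filing))  # → 2
```

Tracked across an institution's filings over several years, even this blunt count can reveal trends that supervisory surveys only capture with a lag.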
Design Considerations for Data Collection
The FSB emphasises five key principles for effective AI monitoring programmes: relevance to identified vulnerabilities, representativeness across institution types and sizes, alignment with emerging standards and taxonomies, timeliness given rapid AI evolution, and proportional burden that leverages existing reporting frameworks wherever possible. These design principles acknowledge a fundamental tension in AI monitoring — the need for comprehensive data must be balanced against the cost and complexity of collection in a rapidly evolving domain.
Recommendations for Regulators and Financial Institutions
The FSB concludes with a set of high-level recommendations directed at both national authorities and international standard-setting bodies, providing a roadmap for strengthening AI monitoring across the global financial system.
For National Authorities
- Enhance monitoring programmes using the indicators framework provided in the report, adapting to local adoption levels and regulatory capacity.
- Formalise metrics and taxonomies through domestic collaboration between sectoral regulators — prudential, market conduct, and central banks should share AI-related data rather than collecting separately.
- Increase engagement with regulated institutions to better understand AI deployment patterns, particularly in critical operations where AI failures could have systemic consequences.
- Explore AI tools for supervision — regulators themselves can leverage AI for fraud detection, cyber defence, anomaly detection, and processing supervisory data at scale.
- Promote cross-authority data sharing to build more complete pictures of sector-wide AI adoption and dependencies.
For the FSB and Standard-Setting Bodies
- Facilitate cross-border cooperation by sharing monitoring experiences and good practices across jurisdictions.
- Work toward alignment in AI taxonomies, definitions, and indicators to enable meaningful cross-border comparison.
- Continue monitoring AI developments and addressing data gaps — the report explicitly positions this as an ongoing programme, not a one-time assessment.
The report also acknowledges practical challenges that members highlighted: simplifying surveys to increase response rates, embedding AI questions into existing monitoring frameworks rather than creating new standalone collections, and developing strategies for monitoring AI providers that fall outside traditional financial supervision scope. These are not merely administrative concerns — they reflect the fundamental difficulty of applying sector-specific regulation to a technology ecosystem that spans far beyond finance.
What This Means for the Future of AI in Finance
The FSB’s October 2025 report represents a pivotal moment in the regulatory response to AI in finance. It marks the transition from awareness to action — from identifying risks in abstract terms to providing specific indicators, data sources, and monitoring strategies that authorities can implement.
Several implications stand out for the financial sector. First, regulation will increasingly focus on AI supply chain transparency. Financial institutions should expect more detailed reporting requirements around their AI dependencies, vendor relationships, and critical AI service concentration. Second, the definitional challenge will eventually drive convergence — the current patchwork of AI definitions across jurisdictions is unsustainable for globally operating institutions, and pressure for harmonisation will build. Third, the market correlation risk, while hardest to monitor, may ultimately prove most consequential — as AI models become more capable and pervasive, the potential for synchronised AI-driven decisions to amplify market dislocations grows.
For financial institutions proactively preparing for this regulatory direction, the message is clear: build internal AI inventories, map your third-party AI dependencies, assess the criticality of AI in your operations, and develop governance frameworks that can withstand scrutiny. The institutions that invest in AI governance now will be better positioned as monitoring requirements formalise into binding regulation. Understanding how high-risk AI assessment frameworks apply to financial services is increasingly essential for compliance teams and risk officers.
The FSB’s work also highlights a broader truth about AI governance: effective oversight requires collaboration between regulators, institutions, technology providers, and international bodies. No single entity can monitor or manage the systemic risks that emerge when AI reshapes an interconnected global financial system. The indicators and approaches outlined in this report are starting points — the real challenge lies in building the institutional capacity, data infrastructure, and international cooperation needed to turn monitoring into meaningful risk mitigation.
Frequently Asked Questions
What are the main AI vulnerabilities identified by the FSB in the financial sector?
The FSB identifies four primary vulnerability categories: third-party dependencies and service provider concentration, market correlations driven by common AI models, cyber risks amplified by AI adoption, and challenges in model risk management, data quality, and governance. Each poses distinct risks to financial stability as AI adoption accelerates across institutions.
How does the FSB recommend financial authorities monitor AI adoption?
The FSB recommends a multi-layered approach combining supervisory reporting, targeted surveys of financial institutions, industry outreach through roundtables and innovation hubs, and analysis of publicly available data such as patent filings and job postings. Authorities should balance monitoring ambition with proportional burden on institutions.
Why is third-party AI concentration a financial stability risk?
When many financial institutions rely on the same small number of AI service providers, a failure or disruption at one provider can cascade across the entire financial system. The FSB notes vertical integration by global technology companies controlling models, infrastructure, and cloud services creates single points of failure that traditional risk frameworks may not adequately capture.
What data sources can regulators use to track AI adoption in finance?
Regulators can use direct sources like supervisory reporting and institution surveys, and proxy indicators including AI patent applications from WIPO PATENTSCOPE, job posting trends from platforms like Indeed and Lightcast, business surveys like the US BTOS, textual analysis of public filings, and vendor data on AI spending. The US BTOS found securities firms more likely to use AI than lenders or insurers.
How does the FSB 2025 report differ from its 2024 AI assessment?
The 2024 FSB report identified AI-related vulnerabilities and their financial stability implications at a high level. The 2025 report shifts to practical monitoring — it provides specific indicators, proxy measures, data collection strategies, and a case study on GenAI supply chain concentration. It also incorporates survey data from 28 responses across 19 jurisdictions to benchmark current monitoring practices.