FSB AI Financial Sector Vulnerabilities: Monitoring Adoption, Concentration and Emerging Risks
Table of Contents
- FSB AI Report Overview: Scope and Context
- AI Adoption Trends Across the Financial Sector
- Third-Party Dependencies and AI Supply Chain Concentration
- The Five Layers of the Generative AI Supply Chain
- AI-Driven Market Correlations and Herding Risks
- Cyber Risks From AI Adoption in Finance
- AI Model Risk, Governance and LLM Hallucinations
- FSB Monitoring Framework and Proposed Indicators
- Open-Weight Models, Vertical Integration and Future Outlook
- Policy Recommendations for AI Financial Stability
📌 Key Takeaways
- Concentration at Every Layer: The AI supply chain is concentrated across hardware, cloud infrastructure, foundation models and applications, with the top three cloud providers controlling approximately three-quarters of financial sector usage.
- Monitoring Still Early Stage: While most financial authorities collect AI adoption data, monitoring efforts focus on use cases rather than specific vulnerabilities, and market correlations remain the most difficult vulnerability to track.
- Third-Party Risk Paradigm Shift: Pre-trained AI models introduce third-party risk without formal contractual relationships, challenging traditional outsourcing-based risk management frameworks.
- Four Core Vulnerabilities: The FSB flags third-party concentration, market correlations, cyber risks, and model governance as the primary AI-related threats to financial stability.
- Vertical Integration Intensifying: Technology providers are bundling hardware, cloud and AI services, increasing switching costs and limiting interoperability for financial institutions.
FSB AI Financial Sector Report: Why This Matters Now
The Financial Stability Board’s October 2025 report on monitoring AI adoption and related vulnerabilities in the financial sector represents the most comprehensive international regulatory assessment of how artificial intelligence is reshaping financial services — and the systemic risks it introduces. Requested by the South African G20 Presidency and building on the FSB’s November 2024 analysis of AI financial stability implications, the report draws on 28 survey responses from authorities in 19 jurisdictions and one international organisation, complemented by interviews and stakeholder outreach with financial institutions, academics and AI service providers.
The timing is critical. Generative AI adoption across banking, insurance, securities and investment management has accelerated dramatically, with UK data showing that foundation model use cases now account for 17% of all AI use cases in financial services. Yet monitoring capabilities have not kept pace: most authorities still focus on cataloguing adoption patterns rather than measuring the specific vulnerabilities that could trigger systemic disruption. This interactive analysis breaks down every major finding from the FSB AI financial sector report and maps the monitoring framework regulators are building to track these risks. For related analysis of how regulators approach financial system oversight, explore our interactive library of financial regulation reports.
AI Adoption Trends Reshaping Financial Services in 2025
The FSB report documents a financial sector in rapid transformation. US data from the Business Trends and Outlook Survey (BTOS) reveals that securities and investment firms use AI more frequently than lending institutions and insurance companies, although lender AI usage has increased sharply through 2025. US Global Systemically Important Banks (G-SIBs) have filed over 1,400 AI-related patent applications over the past decade, with AI patents as a share of total bank patents growing steadily before levelling off in recent years — suggesting a shift from experimental research toward operational deployment.
In the United Kingdom, the Bank of England and Financial Conduct Authority’s 2024 survey reveals a structural shift in how AI capabilities are sourced. Third-party implementations now account for 33% of AI use cases in UK financial services, up from 17% in 2022 — nearly doubling external dependency in just two years. Meanwhile, in Switzerland, FINMA’s 2024 survey found that over 90% of respondents using AI are leveraging generative AI chatbots provided by external AI firms, with smaller institutions relying entirely on service providers for their AI applications.
These adoption patterns create a paradox at the heart of the FSB’s concern: AI is delivering genuine efficiency gains in compliance, analytics, customer service and risk management, but the speed and concentration of adoption are creating new channels through which shocks could propagate across the financial system. For many financial institutions, the choice is increasingly between using concentrated third-party AI services or foregoing generative AI entirely — a dynamic that amplifies systemic concentration risk with each new deployment.
Third-Party Dependencies and AI Service Provider Concentration
The most extensively analysed vulnerability in the FSB AI financial sector report is third-party dependency and service provider concentration. The report documents concentration at every layer of the AI supply chain, from the silicon that powers model training to the applications that financial institutions deploy in production. The hardware market is identified as “currently the most concentrated aspect of the AI supply chain,” with GPU production dominated by a small number of manufacturers whose chips are essential for training and running large language models.
Cloud computing services, the infrastructure layer on which most AI workloads run, are “significantly concentrated among a few global technology providers.” UK data shows that the top three cloud service providers account for approximately three-quarters of cloud usage across the financial sector. At the model layer, concentration is accelerating: the top three model providers accounted for 44% of named providers in 2024, up sharply from 18% in 2022. This concentration trend reflects the enormous capital requirements for training frontier models and the network effects that drive adoption toward established providers.
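The report cites share figures rather than a summary statistic, but a standard way to compress such shares into a single concentration reading is the Herfindahl-Hirschman Index (HHI). The split below is illustrative — only the "top three hold roughly three-quarters" constraint comes from the report, the individual shares are invented for the sketch:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares
    (shares as fractions, result on the conventional 0-10,000 scale)."""
    return sum((100 * s) ** 2 for s in shares)

# Illustrative split (NOT from the report): top three providers hold 75%,
# consistent with the UK cloud-usage figure cited above.
cloud_shares = [0.40, 0.25, 0.10, 0.10, 0.08, 0.07]
print(round(hhi(cloud_shares)))  # 2538 — above the 2500 level often read as "highly concentrated"
```

Under merger-review conventions, an HHI above 2,500 is typically treated as a highly concentrated market, which is why the "three-quarters" figure attracts supervisory attention.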
Perhaps most consequentially, the FSB identifies a fundamental challenge for traditional risk management: pre-trained AI models “often do not involve formal contractual relationships but nevertheless introduce third-party risk.” Financial institutions downloading and deploying open-weight models, using API-based model services, or embedding AI capabilities through software vendors are all acquiring third-party dependencies that may not be captured by existing outsourcing frameworks. This paradigm shift — from contractual vendor relationships to diffuse, often informal technological dependencies — requires entirely new approaches to monitoring, risk assessment and supervisory oversight.
The Five Layers of the Generative AI Supply Chain
The FSB’s case study on GenAI third-party dependencies introduces a five-layer framework for understanding the AI supply chain that financial institutions depend upon. The first layer is hardware — computing chips and GPUs that provide the raw processing power for model training and inference. The second is computing infrastructure — cloud services that provide scalable access to hardware resources. The third layer is training data — the datasets used to train foundation models, whose quality, provenance and potential biases directly influence model outputs. The fourth layer comprises pre-trained foundation models — large language models and other AI systems that serve as the base technology. The fifth layer encompasses user-facing applications — the tools and interfaces through which financial institutions interact with AI capabilities.
What makes this framework particularly valuable for risk assessment is the FSB’s analysis of how concentration and dependencies operate differently at each layer. Hardware concentration is driven by manufacturing economics and intellectual property barriers that create near-monopolistic conditions. Cloud concentration reflects economies of scale and switching costs that lock institutions into specific providers. Model concentration arises from the enormous capital requirements for training frontier systems — creating a dynamic where the choice for many financial institutions is “between using these models or no use of GenAI at all.”
The report further highlights how vertical integration across these layers is intensifying systemic risk. Global technology providers are combining services across hardware, cloud infrastructure and AI models, imposing conditions that discourage non-affiliated technologies and expanding into adjacent markets for data, energy and specialised hardware. When a single provider supplies services across multiple layers, the failure or disruption of that provider affects a financial institution’s AI capabilities at multiple points simultaneously, creating correlated risk that is difficult to hedge or diversify.
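The correlated-risk point can be made concrete with a toy mapping. The provider names below are hypothetical; only the five layers come from the FSB framework:

```python
# Hypothetical mapping (illustrative provider names) of one institution's
# AI stack onto the FSB's five supply-chain layers.
stack = {
    "hardware": "ProviderA",
    "cloud": "ProviderA",           # vertically integrated with hardware
    "training_data": "ProviderB",
    "foundation_model": "ProviderA",  # same group again at the model layer
    "application": "ProviderC",
}

def blast_radius(stack, failed_provider):
    """Layers simultaneously disrupted if one provider fails."""
    return [layer for layer, provider in stack.items() if provider == failed_provider]

print(blast_radius(stack, "ProviderA"))  # ['hardware', 'cloud', 'foundation_model']
```

A single outage at the vertically integrated provider disrupts three of the five layers at once, which is exactly the correlated exposure the report warns is difficult to hedge or diversify.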
AI-Driven Market Correlations and Systemic Herding Risks
Among the four core vulnerabilities identified by the FSB, market correlations emerge as both the most concerning and the most difficult to monitor. The fundamental mechanism is straightforward: when multiple financial institutions use similar AI models trained on similar data to make trading, lending and investment decisions, their actions become correlated in ways that can amplify market moves, create liquidity crunches and increase pro-cyclicality during stress episodes.
The FSB notes that “homogenisation in training data and model architecture can lead to correlated outputs, which could amplify market stress and exacerbate liquidity crunches.” This concern extends beyond algorithmic trading to any domain where AI models influence financial decision-making — credit underwriting, risk management, portfolio construction and market-making. If the same foundation models and training methodologies underpin decisions across multiple institutions, the potential for correlated errors or simultaneous rebalancing during market dislocations becomes a systemic vulnerability.
Yet the report is candid about the difficulty of monitoring this risk. Direct indicators — such as measures of interactions between AI models or stress testing results that capture AI-induced correlations — are not currently available. Authorities must rely on proxy indicators, and “there is still little empirical evidence that AI-driven market correlations affect market outcomes.” The FSB flagged this monitoring challenge as early as 2017, and progress remains limited. Few surveyed authorities collect data on AI-related market correlations, making this the vulnerability with the largest gap between potential impact and current monitoring capability.
Cyber Risks From AI Adoption in Financial Services
The FSB’s analysis of cyber risks operates on a dual axis: AI as a tool that augments malicious actors’ capabilities and AI adoption as a source of new attack surfaces. On the offensive side, generative AI enables more sophisticated phishing attacks, deepfake-based social engineering, and automated vulnerability discovery. On the adoption side, the intense data requirements and novel system interactions inherent in AI deployments create expanded attack surfaces that traditional cybersecurity frameworks may not fully address.
Specific attack vectors highlighted include data and model poisoning — where adversaries manipulate training data or model parameters to produce biased or harmful outputs — and prompt injection, where carefully crafted inputs cause language models to bypass safety controls or reveal sensitive information. The report also identifies AI-driven financial fraud and disinformation as an emerging fifth vulnerability category. Generative AI capabilities for creating deepfakes, synthetic identities and fraudulent documentation are advancing rapidly, with potential to erode trust in digital financial services and, in extreme scenarios, trigger flash crashes or bank runs through AI-generated disinformation campaigns.
The monitoring landscape for cyber risks is more developed than for market correlations, as many jurisdictions already collect general cyber incident data that captures AI-related cases. However, the FSB recommends more granular tracking that distinguishes AI-specific cyber incidents, categorises attack types, monitors third-party AI provider incidents, and assesses AI use cases for cyber defence — creating a comprehensive picture of how AI is reshaping both the threat landscape and the defensive toolkit for financial institutions. For more on how technology is transforming financial services, explore our collection of technology and innovation analyses.
AI Model Risk, Governance and the Challenge of LLM Hallucinations
The fourth core vulnerability — model risk, data quality and governance — addresses the fundamental challenge that AI systems, particularly large language models, introduce new categories of risk that existing model risk management frameworks were not designed to handle. The FSB highlights limited explainability as a structural concern: many AI approaches produce outputs through processes that neither their developers nor their users can fully explain, creating accountability gaps in regulated financial decision-making.
LLM hallucinations receive particular attention. These “seemingly confident but inaccurate outputs” represent a qualitatively different risk from traditional model errors. The FSB notes that “it is more challenging to assess the quality and accuracy of LLM outputs than quantitative forecasts made by machine learning models,” because language model outputs can appear authoritative and well-reasoned while being factually wrong. In financial contexts — regulatory compliance, client advisory, risk assessment — hallucinated outputs could trigger incorrect decisions with material consequences.
The concept of misaligned AI systems features prominently: systems “whose objectives, outputs, or decision-making processes deviate from intended standards” represent a governance vulnerability for which, the report acknowledges, “practices to identify and address are not well developed.” The accessibility of generative AI compounds this risk, as the low barrier to deployment may incentivise financial institutions to adopt AI tools without implementing requisite governance frameworks. A limited number of surveyed authorities have issued guidance specific to AI model governance, suggesting that the regulatory toolkit remains nascent relative to the pace of adoption.
FSB Monitoring Framework: Indicators, Design Principles and Implementation
The most operationally significant contribution of the FSB AI financial sector report is its comprehensive monitoring framework, organised around six indicator categories with five design principles. For AI adoption, the framework recommends inventories of use cases by financial activity, AI type and materiality level, supplemented by proxy indicators including AI patent applications, AI-related job postings, R&D spending and textual analysis of public disclosures.
For third-party dependencies, the proposed indicators span the share of AI applications sourced from third parties, incident notifications affecting AI providers, registers of critical AI services, and the number of systemically important institutions using third-party AI for critical operations. The market correlation category relies primarily on proxy measures — the number of institutions using each widely deployed model, analytical measures of association between AI adoption and asset price volatility, and information on the level of autonomy granted to AI models in key markets.
Cyber risk indicators include categorised AI-related attack counts, internal and third-party incident tracking, and AI deployment for cyber defence. Model risk and governance indicators cover the share of AI models in institutional inventories, supervisory finding trends, and the degree of automated decision-making. AI-driven fraud indicators track generative AI fraud cases, disinformation prevalence and customer complaints. The five design principles — relevance to vulnerabilities, representativeness across institution types, standards alignment, timeliness, and burden sensitivity — provide a practical framework for authorities implementing monitoring programmes while managing the resource constraints that both regulators and financial institutions face.
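One way an authority might operationalise the six categories and five principles is a simple indicator register that records which principles each indicator already satisfies. The schema below is a hypothetical sketch for illustration, not anything the report prescribes:

```python
from dataclasses import dataclass, field

# Hypothetical register schema (not from the report) mapping indicators onto
# the FSB's six categories and checking them against the five design principles.
CATEGORIES = {"adoption", "third_party", "market_correlation",
              "cyber", "model_risk_governance", "fraud"}
PRINCIPLES = {"relevance", "representativeness", "standards_alignment",
              "timeliness", "burden_sensitivity"}

@dataclass
class Indicator:
    name: str
    category: str
    collection_frequency_days: int
    principles_met: set = field(default_factory=set)

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

    def gaps(self):
        """Design principles this indicator does not yet satisfy."""
        return PRINCIPLES - self.principles_met

register = [
    Indicator("share of third-party AI use cases", "third_party", 365,
              {"relevance", "standards_alignment"}),
    Indicator("AI-specific cyber incident count", "cyber", 90,
              {"relevance", "timeliness"}),
]
for ind in register:
    print(ind.name, "->", sorted(ind.gaps()))
```

Tracking the principle gaps per indicator gives an authority a simple prioritisation tool: an indicator that is relevant but collected only annually, for example, fails the timeliness principle and is a candidate for higher-frequency collection.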
Open-Weight Models, Vertical Integration and the Evolving AI Landscape
The FSB report incorporates significant developments in the AI supply chain since its 2024 predecessor, assessing their implications for concentration and financial stability risk. The emergence of open-weight models such as DeepSeek R1 has generated optimism about reducing concentration, as these models lower barriers to entry by allowing financial institutions to deploy, fine-tune and audit AI capabilities without relying on proprietary providers. Six of the top 25 LLMs on the LMArena Leaderboard are published under open-source licences, developed by DeepSeek, Alibaba, Minimax and Google.
However, the FSB tempers this optimism with important caveats. Proprietary models remain more performant and more widely used, suggesting that open-weight alternatives have not yet achieved the capability levels required for the most demanding financial applications. The evolution of reasoning models — which use substantially more inference computing power — creates a tension between lower fixed costs that could attract competition and higher marginal costs that may still favour well-resourced providers.
Most significantly, vertical integration continues to intensify. Global technology providers are bundling services across hardware, cloud infrastructure and AI models, imposing conditions that discourage the use of competing technologies, and expanding into adjacent markets for data, energy and specialised hardware. This integration increases switching costs for financial institutions and limits interoperability between providers, potentially offsetting the diversification benefits that open-weight models might otherwise deliver. The net effect on concentration remains uncertain, and the FSB positions ongoing monitoring of these supply chain dynamics as essential for financial stability oversight.
Policy Recommendations: Building Resilient AI Oversight for Finance
The FSB concludes with three tiers of recommendations that collectively aim to build monitoring capabilities proportionate to the systemic importance of AI in finance. For national authorities, the report urges enhanced monitoring using the proposed indicator framework, collaboration with domestic stakeholders to formalise metrics, deeper engagement with regulated institutions, exploration of AI tools for monitoring and mitigation, and greater data sharing across sectoral financial regulators and with non-financial authorities.
For the FSB and standard-setting bodies, the priority is facilitating cross-border cooperation, working toward alignment in taxonomies and indicators, and supporting capacity building for authorities with fewer resources. The report explicitly acknowledges that AI risks cross national boundaries and regulatory perimeters, making international coordination essential for effective monitoring. The findings will inform future FSB work on AI, with particular attention to the most challenging monitoring areas: market correlations, model risk, data quality, governance gaps and misaligned AI systems.
The overarching policy message is one of calibrated urgency. AI is delivering genuine benefits to financial services and the report explicitly seeks to balance safeguarding stability with supporting safe innovation. But the current monitoring infrastructure — characterised by voluntary surveys, inconsistent definitions, annual or less frequent data collection, and minimal coverage of the most dangerous vulnerabilities — is not keeping pace with adoption. The FSB’s framework provides a roadmap for closing this gap, but implementation depends on national authorities acting with the speed and coordination that the AI transformation demands. For more analyses of how regulators are adapting to technological disruption, browse our full interactive library.
Frequently Asked Questions
What AI vulnerabilities does the FSB identify for the financial sector?
The FSB identifies four core vulnerabilities: third-party dependencies and service provider concentration across the AI supply chain, market correlations from similar AI models amplifying herding behaviour, cyber risks from AI-augmented attacks and expanded attack surfaces, and model risk including LLM hallucinations, limited explainability and governance gaps. A fifth emerging vulnerability is AI-driven financial fraud and disinformation.
How concentrated is the AI supply chain for financial institutions?
Concentration exists at every layer of the GenAI supply chain. The hardware market is the most concentrated, with GPU production dominated by few manufacturers. The top three cloud service providers account for approximately three-quarters of cloud usage. Top three model providers accounted for 44% of named providers in 2024, up from 18% in 2022. Vertical integration is intensifying as technology providers bundle services across hardware, cloud and AI models.
How are financial authorities monitoring AI adoption?
A large majority of surveyed authorities collect data on AI adoption through supervisory monitoring, surveys and publicly available data analysis. Over a third have AI-related supervisory reporting in place. However, monitoring efforts are still at an early stage, focusing primarily on adoption patterns rather than specific vulnerabilities. Fewer than half collect data annually, and market correlations remain the most difficult vulnerability to monitor.
What monitoring framework does the FSB propose for AI risks?
The FSB proposes indicators across six categories: AI adoption inventories, third-party dependency metrics, market correlation proxies, cyber incident tracking, model risk and governance assessments, and AI-driven fraud monitoring. The framework emphasises five design principles: relevance to vulnerabilities, representativeness across financial institution types, standards alignment, timeliness of collection, and proportionate burden on reporting institutions.
What role do open-weight AI models play in reducing financial sector concentration risk?
Open-weight models like DeepSeek R1 could lower barriers to entry and reduce concentration in the AI supply chain. However, the FSB notes that proprietary models remain more performant and more widely used. The report also highlights that reasoning models require more inference computing power, and vertical integration by technology providers continues to intensify, potentially offsetting the diversification benefits of open-weight alternatives.