AI in Financial System: Bank of England Stability Analysis 2025

📌 Key Takeaways

  • Productivity potential: Generative AI could boost financial sector productivity by up to 30% over the next 15 years across banking, insurance, and capital markets
  • Systemic mispricing risk: Common AI model weaknesses could cause multiple banks to simultaneously misprice risk, echoing 2008 crisis dynamics
  • Autonomy frontier: 55% of financial AI use cases already involve some autonomous decision-making, though only 2% are fully autonomous today
  • Provider concentration: Heavy reliance on a small number of AI vendors creates single-point-of-failure risk similar to the July 2024 CrowdStrike outage
  • Cyber arms race: AI simultaneously empowers both financial institutions’ defences and adversaries’ attack capabilities, creating an uncertain stability outlook

Why AI Represents a Financial System Discontinuity

The Bank of England’s Financial Policy Committee (FPC) published its landmark analysis of artificial intelligence in the financial system in April 2025, and the central message is striking: AI represents a “discontinuity relative to previous modelling technologies” rather than an incremental evolution. This is not simply a faster spreadsheet or a more sophisticated regression model. Advanced AI models are dynamic, learning automatically from new input data. Their outputs evolve over time, they process vast volumes of data at scales fundamentally different from any previous analytical tool, and they can make complex decisions autonomously.

The report, produced by the FPC under the chairmanship of Bank of England Governor Andrew Bailey, with members including Deputy Governor Sarah Breeden and PRA CEO Sam Woods, positions AI as a general-purpose technology capable of catalysing scientific and technical breakthroughs across computing, medicine, and financial services. For the United Kingdom specifically, this matters enormously: the UK is the third largest destination for AI investment globally, and the government’s AI Opportunities Action Plan has explicitly highlighted AI’s potential to enhance national competitiveness. The intersection of regulatory ambition and technological disruption creates both unprecedented opportunity and systemic vulnerability—a tension that the FPC navigates throughout this comprehensive assessment.

What makes this analysis particularly valuable for financial professionals, risk managers, and technology leaders is its honest acknowledgment of radical uncertainty. The FPC repeatedly emphasizes the “high degree of uncertainty” over how AI technology and its adoption will evolve, noting that some applications may fail to meet initial promise while entirely unforeseen developments could have profound impacts. This intellectual humility, combined with rigorous scenario analysis, makes the report essential reading for anyone navigating AI strategy in financial services. For a broader perspective on how regulatory bodies worldwide are addressing AI challenges, explore our analysis of the FSB’s framework for monitoring AI adoption in the financial sector.

AI Adoption Landscape in UK Financial Services

The FPC’s 2024 AI Survey provides the most detailed map yet of how artificial intelligence is penetrating British financial services. The top near-term use cases over the next three years cluster around operational efficiency and risk management: optimising internal processes leads the pack, followed by enhancing customer support, combatting financial crime including fraud detection and anti-money laundering, cybersecurity hardening, product and service promotion, client profiling and transaction clustering, forecasting and business modelling, and regulatory compliance and reporting.

Generative AI specifically is reshaping workflows across institutions. Code generation has become a standard use case, alongside information search and retrieval, streamlining internal functions, and AI-powered analytics that enhance customer interactions—such as predicting preferred payment options. One of the most revealing statistics from the survey: approximately 55% of respondents’ AI use cases involve some form of autonomous decision-making, while only 2% are described as fully autonomous. This gap between partial and full autonomy defines the current frontier and the transition zone where risks are most acute.

Credit risk management illustrates both the promise and the caution. The FPC notes that AI use in credit decisions remains “in its infancy” at the aggregate level, but pioneering firms are already deploying gradient boosting decision tree models across pre-screening, application scoring, pricing, and provisioning. In insurance, AI-based models are already widely used for pricing and underwriting, with telematics integration expanding beyond motor insurance to enable more tailored products. The trajectory is clear: AI will progressively move from back-office optimization to core financial decision-making, raising the stakes for both performance and risk management.
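
To make that concrete, here is a minimal sketch of how a gradient boosting decision tree model of the kind the FPC mentions might be used for application scoring. It relies on scikit-learn and synthetic data; the features, parameters, and figures are illustrative assumptions, not details from the report or any firm's production model.

```python
# Minimal sketch: gradient boosting decision trees for credit application scoring.
# Synthetic data and hypothetical features; illustration only, not any firm's model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical applicant features: income, debt-to-income ratio, credit history length.
income = rng.normal(40_000, 15_000, n).clip(min=5_000)
dti = rng.uniform(0.05, 0.8, n)
history_years = rng.integers(0, 30, n)
X = np.column_stack([income, dti, history_years])

# Synthetic default flag: higher debt-to-income and a thin history raise default odds.
logit = -2.0 + 3.5 * dti - 0.04 * history_years - income / 100_000
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Gradient boosting decision trees, the technique the FPC cites for application scoring.
model = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)

# Score held-out applications by predicted probability of default.
pd_scores = model.predict_proba(X_test)[:, 1]
print(f"AUC on held-out applications: {roc_auc_score(y_test, pd_scores):.3f}")
```

Even in this toy form, the opacity concern is visible: the decision emerges from hundreds of trees rather than a rule a credit officer can read, which is why explainability and model risk controls matter as these models move into core decisions.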

In trading and markets, the picture is further advanced. Established AI techniques like decision trees are already deployed in algorithmic trading within highly liquid markets. The IMF reports that over half of all patents filed by high-frequency or algorithmic trading firms now relate to AI. While fully autonomous AI-based trading models are not yet in widespread production, the FPC considers their deployment a plausible near-term scenario, particularly as investment managers leverage generative AI to exploit alternative data sets—including social media content—to discover previously unknown relationships between economic and financial variables.

AI-Driven Credit Risk and Core Decision-Making

The first major risk area identified by the FPC concerns the growing use of AI in banks’ and insurers’ core financial decision-making. The benefits are substantial: enhanced product offerings, greater consumer choice, improved accuracy of financial risk management, and the potential to widen access to finance—particularly for SMEs—through the use of broader data sources. But these benefits carry embedded risks that demand institutional and systemic attention.

At the firm level, the FPC highlights a fundamental challenge: the lack of explainability combined with increasing autonomy means financial institutions may take on risk positions they do not properly understand at the point of decision. This is not a hypothetical concern. The 2024 AI Survey revealed that approximately 50% of respondents acknowledge only a “partial understanding” of the AI technologies they deploy. When models trained on vast datasets make decisions that even their operators cannot fully trace, model risk and data integrity challenges become existential threats to sound financial management.

The systemic implications are where the report draws its most striking parallel. Common weaknesses in widely used AI models could cause many firms to simultaneously misestimate risk, misprice credit, and misallocate capital. The reliance on shared open-source model components or data libraries creates vulnerability. When these common models eventually fail, losses could crystallise across multiple systemically important firms simultaneously, tightening credit supply to the real economy or triggering broader financial contagion through loss of confidence. The FPC explicitly invokes the 2008 Global Financial Crisis as a historical parallel—a debt bubble partly fuelled by collective mispricing of securitised debt that built up because firms collectively relied on similar modelling approaches.
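
A small Monte Carlo sketch illustrates why shared models change the risk profile. In the hedged example below, ten hypothetical banks estimate the loss rate on similar portfolios either with independent models or with one shared model whose estimation error is common to all of them; what differs is the probability that every bank underestimates losses at the same time. All figures are assumptions chosen for illustration, not FPC estimates.

```python
# Illustrative Monte Carlo: independent model errors versus one shared model error.
# All numbers below are assumptions for demonstration, not FPC estimates.
import numpy as np

rng = np.random.default_rng(7)
n_banks, n_trials = 10, 20_000
true_loss_rate = 0.03    # actual portfolio loss rate
error_sd = 0.01          # standard deviation of each model's estimation error
threshold = 0.02         # a bank "misprices" if its estimate falls below this

# Case 1: every bank runs its own model, so estimation errors are independent.
independent = true_loss_rate + rng.normal(0, error_sd, size=(n_trials, n_banks))
# Case 2: every bank relies on the same model, so one error hits all of them at once.
shared = true_loss_rate + np.repeat(rng.normal(0, error_sd, size=(n_trials, 1)), n_banks, axis=1)

def p_all_misprice(estimates):
    """Probability that all banks simultaneously underestimate the loss rate."""
    return (estimates < threshold).all(axis=1).mean()

print(f"Independent models: P(all banks misprice) = {p_all_misprice(independent):.4%}")
print(f"One shared model:   P(all banks misprice) = {p_all_misprice(shared):.4%}")
```

With independent errors, simultaneous mispricing across all ten banks is vanishingly rare; with a single shared model, it is simply the probability that the common model is wrong.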

Existing mitigations include model risk management regulation, data governance frameworks, conduct rules, and the Senior Managers and Certification Regime (SM&CR). However, the FPC’s discussion paper DP5/22 and feedback statement FS2/23 have already flagged areas where these regulatory frameworks may need to evolve. The challenge of establishing liability for AI-driven decisions adds another layer of complexity, particularly when biased or wrongly calibrated models could affect consumer access to financial products and trigger legal challenges and financial redress obligations.

Financial Markets and AI Trading Risks

The second risk area—AI in financial markets—presents a different but equally concerning threat profile. On the positive side, AI promises increased market efficiency through faster incorporation of new information, better returns for end-investors, improved risk management through superior data analysis, and even reduced market correlations as investment managers offer increasingly customised portfolio strategies. The Financial Stability Board (FSB) has noted this diversification potential as a genuine stabiliser.

However, the FPC identifies correlated positions as the top risk cited in IMF outreach on generative AI in capital markets. The mechanism is clear: widespread use of a small number of open-source or vendor-provided models and data sets, combined with general convergence on similar model designs, drives increasingly correlated positioning across market participants. During stress events, this correlation transforms from a latent vulnerability into an acute amplifier—forced unwinding of leveraged positions triggers fire-sales that the Bank has already explored through its system-wide exploratory scenario.
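
The amplification mechanism can be sketched numerically. In the illustrative loop below, a set of hypothetical leveraged funds all hold the same asset; an initial price shock pushes them over a leverage cap, their forced sales depress the price through a simple linear impact term, and the lower price forces further rounds of selling. Every parameter (number of funds, leverage cap, impact coefficient, size of the shock) is an assumption for demonstration, not a calibration from the Bank's system-wide exploratory scenario.

```python
# Illustrative fire-sale loop: correlated leveraged funds forced to sell into a falling market.
# All parameters are hypothetical and chosen only to show the amplification dynamic.
import numpy as np

n_funds = 20
units = np.full(n_funds, 100.0)   # identical holdings of one asset: correlated positions
debt = np.full(n_funds, 80.0)     # borrowing per fund (5x leverage at the start)
price = 1.0
max_leverage = 6.0                # deleveraging trigger on assets / equity
impact = 0.00002                  # price decline per unit sold across all funds

price *= 0.95                     # initial 5% shock to the shared asset

for round_no in range(1, 11):
    equity = units * price - debt
    insolvent = equity <= 0
    over_cap = ~insolvent & (units * price > max_leverage * equity)
    sales = np.zeros(n_funds)
    sales[insolvent] = units[insolvent]      # wiped-out funds liquidate entirely
    sales[over_cap] = units[over_cap] - max_leverage * equity[over_cap] / price
    total = sales.sum()
    if total < 1e-6:
        break
    units -= sales
    debt -= sales * price          # sale proceeds repay borrowing
    price -= impact * total        # forced sales push the price down further
    print(f"round {round_no}: forced sales {total:8.1f} units, price {price:.4f}")
```

Because every fund holds the same asset, the first round of forced selling lowers the mark on everyone's remaining book, which is what keeps the loop running for several rounds before it stabilises.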

The most provocative risk scenario concerns advanced autonomous models in multi-agent trading environments. The FPC raises the possibility that AI models could identify and exploit weaknesses in other firms’ strategies, triggering or amplifying destabilising price movements. More troublingly, models might learn that stress events create profit opportunities and therefore actively increase the likelihood of such events occurring. The potential for emergent collusion or market manipulation—behaviours arising without the human manager’s intention or awareness—represents a novel regulatory challenge with no historical precedent. These dynamics emerge from AI models’ capacity for dynamic learning in multi-agent environments coupled with poor explainability of their strategic reasoning.

For a deeper understanding of how cybersecurity intersects with these trading risks, our analysis of the WEF Global Cybersecurity Outlook 2025 provides complementary perspectives on systemic digital threats across the financial sector.

Operational Resilience and AI Provider Concentration

The third risk area addresses the operational dimension: financial institutions’ growing reliance on a concentrated set of AI service providers. The FPC survey evidence confirms that firms generally depend on vendor-provided AI models for the most complex and powerful capabilities, particularly the latest large language models. Even institutions developing in-house models typically rely on cloud computing infrastructure and external data aggregators. Crucially, the survey indicates that third-party exposure will continue to increase as model complexity grows and outsourcing costs decline.

The systemic risk is straightforward: reliance on a small number of providers for a given service creates single-point-of-failure vulnerability. If a major disruption occurs at a key provider—and it is not feasible to migrate rapidly to alternatives—many firms could simultaneously lose the ability to deliver vital services. The FPC points to a vivid real-world precedent: the July 2024 CrowdStrike outage, when a single flawed software update caused worldwide IT disruption affecting banking and payment services across multiple institutions and countries simultaneously.
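
One way to make the concentration point tangible is to map which firms rely on which provider for a given AI service and ask how many would lose that service if any single provider suffered an outage. The sketch below does this for an invented dependency table and also computes a Herfindahl-style concentration index; the firm names, providers, and the choice of metric are all hypothetical, not drawn from the FPC's survey.

```python
# Hypothetical mapping of firms to their sole provider of a critical AI service.
# Names and shares are invented for illustration.
from collections import Counter

dependencies = {
    "Bank A": "VendorX", "Bank B": "VendorX", "Bank C": "VendorX",
    "Bank D": "VendorX", "Insurer E": "VendorY", "Insurer F": "VendorX",
    "Fund G": "VendorY", "Fund H": "VendorZ",
}

counts = Counter(dependencies.values())
total = len(dependencies)

# Share of firms that would lose the service if each provider suffered an outage.
for provider, n in counts.most_common():
    print(f"{provider}: outage would hit {n}/{total} firms ({n / total:.0%})")

# Herfindahl-style concentration index over provider shares
# (1.0 = one provider serves everyone; 1/number_of_providers = evenly spread).
hhi = sum((n / total) ** 2 for n in counts.values())
print(f"Concentration index: {hhi:.2f}")
```

In this made-up example a single provider serves the majority of firms, so one outage propagates across most of the system at once, which is the CrowdStrike-style scenario the FPC highlights.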

AI services may differ from traditional third-party dependencies in important ways. The complexity of foundation models makes substitution harder, and the challenges of identifying specialised and niche providers create additional fragility. The FPC’s macroprudential approach to operational resilience, developed in March 2024, provides the framework for addressing these risks. A public-private “shared responsibility model” for AI is being developed by the Bank alongside other authorities, financial firms, and non-financial participants to clarify governance responsibilities across different AI deployment architectures.

The Financial Services and Markets Act 2023 established a new critical third parties regulatory regime, with rules published jointly by the Bank, PRA, and FCA in November 2024 alongside supervisory statement SS6/24. This responds directly to the FPC’s 2021 recommendation on concentration risks. The FPC now signals that certain AI third parties could emerge as potential future critical third parties under this framework, extending regulatory oversight into the AI supply chain.

Cyber Threats in an AI-Enhanced Landscape

Cyberattacks remain among the top perceived sources of risk to the financial system in the Bank’s Systemic Risk Survey, and cybersecurity ranked among the highest perceived current AI-related risks in the 2024 AI Survey. Respondents expected this risk to grow significantly over the next three years—a consensus view that reflects both the expanding attack surface and the increasing sophistication of AI-enabled threats.

The offensive capabilities AI provides to malicious actors are substantial and varied. AI can enhance the sophistication and scale of cyberattacks against financial institutions. New attack surfaces emerge from AI systems themselves: data poisoning—the malicious manipulation of model training data—can corrupt the foundation of AI decision-making. Prompt injection attacks can manipulate public-facing AI models to extract confidential information. Deepfakes and AI-generated highly personalised text enable more effective fraud against both employees and retail customers. AI could also help those engaged in illicit financing circumvent anti-money laundering and counter-terrorism financing controls.

The systemic implications compound these firm-level threats. Widespread deployment of common AI models sharing the same cyber vulnerabilities across systemically important firms creates a system-wide attack surface. Large-scale cyberattacks could propagate through operational contagion or loss of confidence across the financial system. Recent ransomware attacks targeting financial firms and their third-party service providers have demonstrated this amplification potential. Looking further ahead, the potential combination of AI with quantum computing by cyberattackers represents a long-term threat vector that defies current defensive paradigms.

However, AI also offers significant defensive capabilities. Automated identification of malware, detection of illicit finance activity, and real-time threat monitoring could substantially improve cybersecurity posture. Survey respondents expected the benefits of AI for cybersecurity and AML to grow significantly over the next three years. This creates what the FPC terms a “technological arms race” between financial institutions and malicious actors, with the overall impact on financial stability remaining genuinely uncertain. The Cross Market Operational Resilience Group (CMORG) established an AI Taskforce in 2024 specifically to develop scenarios around how adversaries could leverage generative AI in attacks against financial infrastructure.
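
On the defensive side, a common building block is unsupervised anomaly detection over transaction features to surface candidates for fraud or AML review. The sketch below applies scikit-learn's IsolationForest to synthetic transactions; the features, contamination rate, and data are assumptions for illustration rather than a description of any firm's actual controls.

```python
# Minimal sketch: flagging anomalous transactions for AML/fraud review.
# Synthetic data and hypothetical features; illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Ordinary transactions: modest amounts, daytime hours, few per customer per day.
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.6, size=2_000),   # amount
    rng.normal(13, 3, size=2_000),                    # hour of day
    rng.poisson(2, size=2_000),                       # transactions that day
])

# A handful of unusual ones: large amounts, odd hours, bursts of activity.
unusual = np.column_stack([
    rng.lognormal(mean=7.5, sigma=0.4, size=20),
    rng.normal(3, 1, size=20),
    rng.poisson(15, size=20),
])

X = np.vstack([normal, unusual])

# Isolation forest scores points by how easily they are isolated; rare,
# extreme combinations of features are labelled as anomalies (-1).
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)
print(f"Flagged {int((flags == -1).sum())} of {len(X)} transactions for review")
```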

The FPC Micro-Macro Vulnerability Framework

One of the most valuable contributions of the FPC report is its structured analytical framework for assessing AI’s financial stability implications. The micro-macro vulnerability framework provides a systematic lens that separates entity-level risks from system-level vulnerabilities, connects them through transmission channels, and traces their potential impact on financial stability outcomes.

At the microfinancial level, the framework identifies three categories of vulnerability: mismatches and exposures within individual entities, dependencies on external service provision, and flaws in risk assessment and management processes. These firm-level risks are well understood in traditional regulatory frameworks. What AI changes is the scale, speed, and opacity with which these vulnerabilities can develop and interact.

The macrofinancial dimension adds the system-level features that transform individual risks into systemic threats: correlation across firms’ positions and behaviours, interconnectedness through shared models and providers, and concentration in the AI supply chain. The FPC introduces the concept of “outcome agnostic” firms—institutions that do not factor system-level outcomes into their individual AI deployment decisions, inadvertently creating macro risks even when their micro-level risk management is sound. This collective action problem sits at the heart of the macroprudential challenge.

Transmission channels include systemic institutions whose distress could propagate losses, systemic markets whose dysfunction could disrupt capital allocation, and critical infrastructure—particularly payment systems—whose failure could paralyse the wider economy. The framework’s ultimate concern is disruption to vital financial services, especially the provision of credit to the real economy. This structured approach allows regulators and market participants alike to map AI risks systematically rather than addressing them in an ad hoc manner. Our coverage of the McKinsey State of AI 2025 explores how these institutional frameworks intersect with broader enterprise AI transformation trends.

Monitoring and International Regulatory Coordination

The FPC’s monitoring infrastructure combines five current information sources: the Bank and FCA Survey on AI in UK Financial Services, a new AI Consortium for public-private engagement, market intelligence from direct discussions with participants, supervisory intelligence from regulated firms and financial market infrastructures, and regulatory and commercial data sources. Each contributes a different perspective on the evolving AI landscape within financial services.

Planned enhancements reveal the FPC’s forward-looking ambition. Future monitoring tools under consideration include AI-related incident reporting systems, structured market intelligence gathering focused on AI developments, increased thematic supervisory activity targeting AI-specific risks, and—most ambitiously—potential future system-wide exercises designed to explore AI-related risk scenarios, which could themselves employ AI in their design and execution. This represents a significant evolution in the regulator’s analytical toolkit.

The international dimension is critical given the global nature of both AI markets and financial systems. The Bank, PRA, and FCA are actively engaged with the Financial Stability Board (FSB), which published its own AI and financial stability report in 2024 with further work planned. The G7 and G20 maintain ongoing workstreams, and both the IMF and IOSCO have dedicated programmes examining AI in capital markets. This multilateral coordination reflects the reality that AI-related financial risks do not respect national borders—a cyberattack leveraging AI against a major provider in one jurisdiction can instantly cascade across the global financial system.

Policy Implications and the Path Forward

The FPC’s policy calibration reflects a nuanced understanding that premature regulation could stifle beneficial innovation while insufficient oversight could allow systemic risks to build unchecked. The near-term focus centres on working with industry through the AI Consortium to understand changing deployment patterns, identify and share good practice for managing AI-related risks, and document AI-related incidents and “near misses” that reveal emerging vulnerabilities.

The Bank acknowledges the potential need to evolve existing guidance and regulation as AI adoption deepens. Even where microprudential measures prove sufficient for individual firms, the FPC may identify macrofinancial vulnerabilities that require additional macroprudential intervention. Two specific triggers are highlighted: if AI increases correlations or procyclical decision-making across financial markets, existing work on non-bank financial institution leverage may need adjustment; and if changes in AI market structure increase reliance on common models or providers, different forms of oversight or response may become necessary.

The overall message is one of watchful engagement rather than reactive restriction. The FPC’s secondary objective—supporting the Government’s economic policy including growth and employment—creates a mandate to ensure that regulatory approaches enable the potential 30% productivity gains that AI promises to financial services while preventing the accumulation of systemic risks that could threaten financial stability. Navigating this balance will define financial regulation for the next decade, and the FPC’s framework provides the most comprehensive institutional blueprint yet for achieving it.

For professionals tasked with implementing these insights within their organisations, the priority is clear: build AI governance that addresses not just firm-level model risk but also contributes to system-level resilience. This means diversifying AI model providers, stress-testing AI-dependent processes for correlated failure scenarios, and actively engaging with regulatory initiatives like the AI Consortium. The institutions that emerge strongest from the AI transition will be those that treat financial stability as a competitive advantage, not merely a compliance obligation.

Frequently Asked Questions

What are the main AI risks to financial stability identified by the Bank of England?

The Bank of England FPC identified four main risk areas: credit mispricing from AI-driven financial decisions, correlated trading positions in financial markets, operational dependency on concentrated AI service providers, and evolving cyber threats enhanced by AI capabilities including deepfakes and data poisoning attacks.

How is AI currently being used in UK financial services?

UK financial firms primarily use AI for optimising internal processes, enhancing customer support, combatting financial crime and fraud, cybersecurity, client profiling, and regulatory compliance. Generative AI is increasingly used for code generation, information retrieval, and analytics. About 55% of use cases involve some autonomous decision-making, though only 2% are fully autonomous.

Could AI cause the next financial crisis like 2008?

The FPC draws explicit parallels between AI adoption and the 2008 crisis. Common AI model weaknesses across multiple banks could cause simultaneous risk mispricing, similar to how collective mispricing of securitised debt fuelled the 2008 crisis. The risk is amplified by concentration in AI model providers and correlated algorithmic trading positions.

What is the Bank of England doing to monitor AI risks in finance?

The Bank of England conducts regular AI surveys of financial firms, is establishing an AI Consortium for public-private engagement, monitors market intelligence and supervisory data, collaborates internationally with the FSB, IMF, and G7, and has launched an AI Taskforce through CMORG to develop cyberattack scenarios involving AI.

How much could AI boost financial sector productivity?

According to research cited in the FPC report, generative AI could deliver productivity gains of up to 30% across banking, insurance, and capital markets sectors over the next 15 years. The UK is the third largest destination for AI investment globally, positioning its financial sector for significant AI-driven transformation.
