Bank of England AI in the Financial System — Financial Stability Risks, Cyber Threats and Macroprudential Response
Table of Contents
- Why the Bank of England Is Raising the Alarm on AI in Finance
- AI Adoption in UK Financial Services — Current Landscape
- AI in the Financial System — Four Stability Risks the FPC Monitors
- Systemic Credit Risk — When AI Models Get Lending Decisions Wrong
- AI in Financial Markets — Herding, Fire-Sales and Autonomous Trading
- AI Provider Concentration — What Happens When Key Services Go Down
- AI-Powered Cyber Threats — Deepfakes, Data Poisoning and Prompt Injection
- The FPC Monitoring Playbook — Surveys, AI Consortium and System-Wide Exercises
- International Coordination on AI in the Financial System — FSB, IMF and G7
- What Comes Next — Macroprudential Response if AI Risks Escalate
📌 Key Takeaways
- Four Critical Risk Areas: The FPC identifies AI risks in core financial decisions, market stability, provider concentration, and cyber threats as the primary financial stability concerns.
- 55% Autonomous Decision-Making: Over half of AI use cases in financial firms involve some autonomous decision-making, yet 50% of firms only partially understand their AI technologies.
- 30% Productivity Potential: Generative AI could deliver up to 30% productivity gains across banking, insurance, and capital markets over the next 15 years.
- 2008 Parallels: Common weaknesses in widely used AI models could cause simultaneous credit mispricing across firms, echoing the systemic failures of the Global Financial Crisis.
- New Critical Third-Party Regime: The Financial Services and Markets Act 2023 enables designation of AI providers as critical third parties whose disruption could threaten financial stability.
Why the Bank of England Is Raising the Alarm on AI in Finance
The Bank of England’s Financial Policy Committee (FPC) has published its most comprehensive assessment of AI’s impact on financial stability, recognizing that artificial intelligence is poised to transform UK financial services in ways that demand macroprudential attention. The report, Financial Stability in Focus: AI in the Financial System (Issue No. 11, April 2025), maps the intersection of rapid AI adoption with systemic risk — and the picture is both promising and deeply concerning.
The UK stands as the third largest destination for AI investment globally, positioning its financial sector at the forefront of AI adoption. One study estimates that generative AI could bring productivity gains of up to 30% to banking, insurance, and capital markets over the next 15 years. But the FPC’s mandate is not to celebrate efficiency gains — it is to identify where AI could amplify risks to the stability of the entire financial system.
What makes this report essential reading is its macroprudential lens. While individual firms may deploy AI responsibly, the FPC is concerned about emergent risks that arise when many firms adopt similar AI models, data sources, and providers simultaneously. This collective behavior can create systemic vulnerabilities invisible at the firm level but catastrophic at the system level — a pattern that has triggered every major financial crisis from 2008 onwards. As regulators worldwide grapple with AI’s implications for financial sector stability, the Bank of England’s framework provides a rigorous template for what to watch and when to act.
AI Adoption in UK Financial Services — Current Landscape
The Bank of England’s 2024 AI Survey reveals a financial sector in the early stages of what promises to be a fundamental transformation. The top near-term use cases over the next three years center on optimizing internal processes, enhancing customer support, and combating financial crime. Generative AI is already being deployed for code generation, information search and retrieval, and document summarization — the operational backbone of modern finance.
The adoption data tells a story of cautious but accelerating deployment. 55% of respondents’ AI use cases involve some form of autonomous decision-making, signaling that AI is moving beyond pure automation into judgment territory. However, only 2% of use cases are described as fully autonomous — most retain human oversight at critical decision points. This transition zone between human-directed and autonomous operation is precisely where the most complex risks emerge.
Perhaps the most striking finding is that approximately 50% of respondents report having only a partial understanding of the AI technologies they use. This knowledge gap creates a fundamental tension: firms are deploying increasingly sophisticated AI systems while lacking the technical depth to fully assess their limitations, failure modes, and systemic implications. Established AI techniques like gradient boosting and decision trees already underpin credit scoring, insurance pricing, and provisioning models. Insurers integrate telematics data with AI-driven pricing, while banks experiment with AI-enhanced underwriting for SME lending.
The Bank of England itself is not immune to AI’s pull. Internal trials of generative AI tools showed significant productivity benefits in document summarization, meeting notes, and code generation. These tools are now being rolled out across the Bank under the TRUSTED framework — requiring AI solutions to be Targeted, Reliable, Secure, Understood, Ethical, stress-Tested, and Durable.
AI in the Financial System — Four Stability Risks the FPC Monitors
The FPC organizes its analysis around four interconnected risk areas that could individually or collectively threaten financial stability. These are not theoretical concerns — each is grounded in observable trends and recent precedents that demonstrate how AI amplifies existing vulnerabilities while creating entirely new ones.
The four risk areas form an interconnected web: AI in core financial decision-making could drive systemic credit mispricing; AI in financial markets could amplify herding and fire-sales; operational concentration in AI service providers creates single points of failure; and the evolving cyber threat environment adds an adversarial dimension where AI itself becomes both weapon and shield. The FPC’s framework maps each risk along two dimensions — the extent to which it represents a microfinancial versus macrofinancial vulnerability, and whether AI is the primary driver or an amplifier of existing risks.
What distinguishes this analysis from typical risk assessments is its emphasis on emergence — risks that are invisible or manageable at the individual firm level but become systemic when many firms exhibit similar behaviors. A single bank using AI for credit decisions poses manageable risk. Every major bank using similar AI models trained on similar data, making correlated decisions in similar market conditions, poses a threat to the entire financial system’s stability.
Systemic Credit Risk — When AI Models Get Lending Decisions Wrong
The first and arguably most consequential risk area addresses AI’s growing role in banks’ and insurers’ core financial decision-making. The FPC draws an explicit parallel to the 2008 Global Financial Crisis, where widespread reliance on flawed risk models led to systemic mispricing of securitized products and cascading institutional failures.
The mechanism is straightforward but dangerous: if multiple financial institutions adopt similar AI models or train on overlapping datasets, a common weakness could cause many firms to simultaneously misestimate risks, misprice credit, and misallocate capital. Unlike traditional model risk where each firm builds bespoke models with unique assumptions, the AI ecosystem is increasingly dominated by a small number of foundation models and training methodologies. This convergence creates a new vector for systemic vulnerability.
Model risk compounds the problem. AI systems — particularly deep learning models — operate as effective black boxes where the relationship between inputs and outputs resists human interpretation. The FPC notes that this lack of explainability, combined with increasing autonomy, could lead to financial risk-taking that firms themselves do not fully understand. When approximately half of all firms acknowledge only partial understanding of their AI technologies, the foundation for sound risk management is fragile.
Beyond credit mispricing, the FPC identifies conduct risk as a significant concern. Biased or wrongly calibrated AI models could systematically deny credit or insurance to certain demographics, creating legal liability and potential mass financial redress claims. The question of liability — who is ultimately responsible when an AI model makes a harmful decision — remains practically unresolved across most jurisdictions. This ambiguity could delay corrective action during a stress event, as firms, vendors, and regulators debate accountability while losses mount.
Transform complex financial stability reports into interactive experiences your compliance team will actually engage with.
AI in Financial Markets — Herding, Fire-Sales and Autonomous Trading
The second risk area examines AI’s transformation of financial market dynamics. A striking data point anchors the analysis: over 50% of all patents filed by high-frequency and algorithmic trading firms now relate to AI, according to IMF data. This signals a fundamental shift in how markets process information, execute trades, and respond to stress.
The FPC’s core concern is correlated positioning. When many market participants use similar AI models or datasets, their trading strategies converge — they buy the same assets, hold similar positions, and react to the same signals. During normal market conditions, this convergence may be invisible. During stress events, it triggers simultaneous deleveraging as correlated models reach the same conclusions at the same time, amplifying fire-sales and exacerbating price dislocations.
More troubling is the possibility that advanced autonomous AI models could develop exploitative behaviors. Models optimized for profit maximization might learn to identify and exploit weaknesses in other firms’ strategies, triggering or amplifying destabilizing price movements. The FPC raises the scenario where AI models learn that stress events create profit opportunities and actively take positions that increase the likelihood of such events occurring — a dynamic that would fundamentally undermine market stability.
The report also flags a subtle but significant risk: AI-facilitated collusion without human intention. When multiple AI models learn dynamically from the same market environment, they can converge on tacitly collusive strategies — coordinating prices or positions without any human designing or even recognizing the coordination. This emergent behavior falls outside traditional market manipulation frameworks, which assume human intent. As herding and market concentration was the top risk cited in recent IMF outreach to stakeholders on generative AI in capital markets, the FPC’s concerns align with global regulatory consensus.
AI Provider Concentration — What Happens When Key Services Go Down
The third risk area addresses a vulnerability that became viscerally real in July 2024 when the CrowdStrike worldwide IT outage disrupted operations across financial institutions globally. The FPC uses this precedent to illustrate how concentrated third-party dependencies create systemic risk — and argues that AI service providers are creating an even more concentrated dependency structure.
The most computationally powerful foundation models are developed by a small number of firms. Financial institutions increasingly depend on these providers not just for off-the-shelf tools but for core capabilities in risk assessment, fraud detection, and customer interaction. If a key AI service provider experiences a prolonged outage, many firms could simultaneously lose the ability to deliver vital services, including time-critical payments processing.
Migration risk compounds the problem. Unlike traditional IT services where switching providers is feasible (if costly), AI model dependencies create deep integration challenges. Fine-tuned models, training data pipelines, and operational workflows are often tightly coupled to specific providers. Rapid migration to alternatives during a crisis may not be technically feasible, leaving firms stranded on failed infrastructure.
The regulatory response is already taking shape. The Financial Services and Markets Act 2023 established a critical third-party regime, with the Bank of England, PRA, and FCA jointly publishing rules in November 2024 (supervisory statement 6/24). AI data and model providers could emerge as potential future critical third parties under this regime. The shared responsibility model for AI being developed through public-private collaboration seeks to clarify how risk allocation should work between third-party providers and client firms.
Help your risk and compliance teams engage with regulatory AI guidance — make every document count.
AI-Powered Cyber Threats — Deepfakes, Data Poisoning and Prompt Injection
The fourth risk area addresses a dimension where AI is simultaneously the threat, the target, and the defense. Cyberattacks remain near the top of the Bank of England’s Systemic Risk Survey as perceived key sources of risk to the financial system, and AI is fundamentally changing the threat landscape.
On the offensive side, AI enables attackers to scale and sophisticate their operations dramatically. Deepfakes powered by generative AI make social engineering attacks more convincing and harder to detect. Personalized phishing campaigns generated by language models can target thousands of employees simultaneously with customized messages that bypass traditional filters. Data poisoning — the deliberate manipulation of model training data — creates a new category of attack that corrupts the decision-making foundations of financial institutions from within.
Prompt injection represents a particularly insidious threat vector specific to AI-powered systems. Attackers manipulate customer-facing AI models through carefully crafted inputs, potentially extracting confidential information or causing the model to take unauthorized actions. As financial institutions deploy AI chatbots, virtual assistants, and automated advisory services, each interface becomes a potential entry point for prompt injection attacks.
The systemic dimension emerges from shared vulnerabilities. If many financial institutions use AI models built on common architectures or trained on similar datasets, a vulnerability discovered in one model may be exploitable across all of them. This creates the potential for large-scale cyberattacks amplified through operational contagion — a single exploit affecting multiple systemic institutions simultaneously. The CMORG AI Taskforce, established in 2024, is developing scenarios for how malicious actors could use generative AI against financial services, while the UK National Cyber Security Centre continues to assess the evolving threat landscape.
The FPC Monitoring Playbook — Surveys, AI Consortium and System-Wide Exercises
The Bank of England’s monitoring approach draws on five complementary information sources, recognizing that no single channel can capture the full picture of AI-related financial stability risks. The AI Survey provides quantitative data on adoption patterns, risk perceptions, and AI governance practices across the financial sector. The next iteration will expand coverage to underrepresented sectors and adapt questions to the evolving risk environment.
The AI Consortium being established represents a new model for public-private engagement on AI capabilities, development, deployment, and use in UK financial services. This platform aims to bridge the knowledge gap between regulators and practitioners, ensuring that supervisory frameworks keep pace with technological change. Complementing formal channels, the FPC conducts regular market intelligence discussions with participants most advanced in AI adoption, capturing insights that quantitative surveys cannot.
Supervisory intelligence from PRA and FCA-regulated firms and Bank-regulated financial market infrastructures provides granular, firm-level data on AI implementation challenges, near-misses, and emerging risks. Combined with regulatory and commercial data sources, this multi-channel approach creates a monitoring infrastructure designed to detect AI-related vulnerabilities before they crystallize into systemic events.
Looking ahead, the FPC outlines more ambitious monitoring tools: AI-related incident reporting to track failures and near-misses systematically, structured AI market intelligence gathering, increased thematic supervisory activity focused on specific AI applications like lending models, and potential system-wide exercises to stress-test the financial system’s resilience to AI-related scenarios. The FPC notes that these system-wide exercises might themselves use AI — a recursive approach that reflects how deeply embedded the technology is becoming.
International Coordination on AI in the Financial System — FSB, IMF and G7
AI risks in finance are inherently cross-border. An AI model developed in one jurisdiction, trained on global data, and deployed across multiple countries creates risk vectors that no single regulator can address alone. The FPC recognizes this reality and is actively engaged with multiple international bodies to coordinate the macroprudential response.
The Financial Stability Board (FSB) published its landmark report on AI and financial stability in 2024 and is consulting on enhanced public disclosure requirements for aggregate market positioning and liquidity — directly addressing the herding and concentration risks the FPC identifies. The IMF and IOSCO have both undertaken significant work on AI in capital markets, with the IMF’s finding that over 50% of HFT patents now relate to AI providing key evidence for the FPC’s market stability analysis.
The G7 and G20 maintain active AI governance work streams, while the G7 Cyber Experts Group addresses the cybersecurity dimension specifically. This multi-layered international architecture reflects the complexity of AI-related financial risks: they span prudential regulation, market conduct, operational resilience, and cybersecurity — each with different institutional leads and different timelines for policy action.
For the UK specifically, maintaining alignment with international standards while preserving the flexibility to act unilaterally on emerging risks is a delicate balance. The FPC signals its intention to be both a contributor to global standards and an early mover where UK-specific vulnerabilities demand it — particularly given the UK’s outsized role as a global financial center and third-largest AI investment destination.
What Comes Next — Macroprudential Response if AI Risks Escalate
The most consequential section of the report outlines the conditions under which the FPC would move from monitoring to intervention. If AI significantly increases correlations or procyclical decision-making in financial markets, the FPC’s existing work on non-bank financial institution (NBFI) leverage may need to be adjusted to account for AI-driven amplification effects.
Changes to market structure driven by AI — particularly greater reliance on common models and providers — may require entirely new forms of oversight or response that go beyond current regulatory frameworks. The FPC explicitly acknowledges that existing guidance and regulation may need to evolve to support safe AI adoption while maintaining financial stability.
The report also highlights the importance of the Senior Managers and Certification Regime (SM&CR) as a supervisory tool for ensuring individual accountability on AI-related issues. This framework assigns personal responsibility to named senior managers for AI governance within their firms — creating incentives for proper risk management that go beyond institutional compliance to individual career consequences.
The Bank of England’s approach reflects a regulatory philosophy of proportionate vigilance: building monitoring infrastructure now, engaging with industry proactively, and preserving the policy space to intervene decisively if risks escalate. The FPC is not calling for AI to be restricted — it recognizes the transformative economic benefits, including the potential 30% productivity gains across financial services. Rather, it is building the institutional capacity to distinguish between beneficial AI adoption and AI deployment patterns that could threaten the stability of the financial system that serves 67 million UK residents.
Turn Bank of England reports into interactive experiences — help your team stay ahead of AI regulatory developments.
Frequently Asked Questions
What AI risks to the financial system does the Bank of England identify?
The Bank of England’s Financial Policy Committee identifies four key AI financial stability risks: AI in core financial decision-making that could cause systemic credit mispricing similar to 2008, AI in financial markets driving correlated positioning and amplified fire-sales, operational concentration risks from dependency on a small number of AI service providers, and an evolving cyber threat environment where AI enhances attacker capabilities through deepfakes, data poisoning, and prompt injection attacks.
How could AI cause a financial crisis similar to 2008?
According to the Bank of England, common weaknesses in widely used AI models could cause many financial firms to simultaneously misestimate risks, misprice credit, and misallocate capital. If multiple banks rely on similar AI models or datasets for lending decisions, a shared flaw could create systemic vulnerability analogous to the 2008 Global Financial Crisis, where widespread mispricing of securitized products triggered a cascade of failures across interconnected institutions.
What percentage of financial firms use autonomous AI decision-making?
The Bank of England’s 2024 AI Survey found that 55% of respondents’ AI use cases involve some form of autonomous decision-making, though only 2% are described as fully autonomous. Additionally, approximately 50% of respondents reported having only a partial understanding of the AI technologies they use, raising concerns about firms’ ability to manage AI-related risks effectively.
What is the Bank of England’s AI monitoring approach for financial stability?
The FPC monitors AI risks through five channels: an enhanced AI Survey with broader sector coverage, a new AI Consortium for public-private engagement, market intelligence from advanced AI adopters, supervisory intelligence from PRA and FCA-regulated firms, and regulatory and commercial data sources. Future tools include AI-related incident reporting, system-wide exercises, and potential macroprudential responses if AI significantly increases market correlations.
How does AI affect cybersecurity risks in finance?
AI creates a bidirectional impact on financial cybersecurity. On the defensive side, AI improves automated detection of malware, fraud, and illicit finance activity. On the offensive side, AI enhances attacker capabilities through more convincing deepfakes, personalized phishing campaigns, data poisoning of training datasets, and prompt injection attacks against customer-facing AI models. The Bank of England notes that cyberattacks remain near the top of perceived systemic risks in their latest risk survey.
What is the UK’s critical third-party regime for AI providers?
The Financial Services and Markets Act 2023 established a new critical third-party regime. The Bank of England, PRA, and FCA jointly published rules in November 2024 allowing designation of critical third parties whose disruption could threaten financial stability. AI data and model providers could emerge as potential future critical third parties under this regime, addressing the concentration risk of financial firms depending on a small number of AI service providers.