AI in Capital Markets: IOSCO 2025 Report on Artificial Intelligence Risks and Regulation

📌 Key Takeaways

  • Widespread AI adoption: 49% of surveyed market participants have adopted AI, with 41% already running use cases in production environments.
  • Top use cases identified: Client communications (66.7%), algorithmic trading (63.3%), and robo-advising (60%) lead AI deployment across capital markets.
  • Cybersecurity ranked highest risk: Market participants scored cybersecurity at 4.26 out of 5 as the most pressing AI-related concern in financial markets.
  • Regulatory momentum building: 24 jurisdictions reported active engagement with AI governance, with frameworks ranging from technology-neutral to bespoke AI-specific rules.
  • Concentration risk emerging: A small number of cloud providers and AI model developers create systemic dependency risks across the financial sector.

Understanding IOSCO’s AI in Capital Markets Report

The role of AI in capital markets has entered a transformative era. In March 2025, the International Organization of Securities Commissions (IOSCO) published its landmark report on artificial intelligence in capital markets, a significant milestone in the global regulatory understanding of how AI technologies are reshaping financial services. Designated IOSCOPD788, this comprehensive document represents Phase 1 of IOSCO’s two-phased approach to addressing AI in securities markets, gathering critical intelligence on use cases, risks, and challenges facing the industry.

IOSCO, which serves as the global standard-setter for securities regulation and represents over 130 jurisdictions, conducted extensive research through two primary survey instruments. The first targeted IOSCO members and self-regulatory organizations (SROs), with 24 members and 6 SROs responding from across the globe. The second survey reached market participants through the Affiliate Members Consultative Committee (AMCC), gathering responses from 184 financial institutions spanning broker-dealers, asset managers, exchanges, and other market intermediaries.

This report builds on IOSCO’s foundational 2021 guidance on the use of AI and machine learning in capital markets, which established six core measures for responsible AI deployment. The 2025 update reflects the dramatic evolution of AI capabilities since then, particularly the emergence of large language models (LLMs), generative AI (GenAI), foundation models, and increasingly autonomous agentic AI systems that have transformed the technology landscape. For financial professionals seeking to understand the intersection of regulatory guidance and emerging technologies, this report provides an essential reference framework.

AI Adoption Rates Across Global Capital Markets

The IOSCO report reveals that artificial intelligence adoption in capital markets has reached a critical inflection point. According to the AMCC survey of 184 market participants, 49% reported having adopted AI in some capacity, while 50% indicated they had invested financial resources in AI technologies. These figures demonstrate that AI in capital markets is no longer an experimental frontier but an operational reality for roughly half the industry.

Among those organizations with active AI initiatives, the deployment maturity varies significantly. A notable 41% of respondents reported having AI use cases already running in production environments, indicating that these systems are making real decisions and processing actual market data. An additional 8% reported AI use cases currently in pilot phases, suggesting a steady pipeline of new applications moving toward production deployment.

Investment levels in AI technology also paint a revealing picture of industry commitment. Of the respondents who disclosed their AI spending, 26% had invested less than one million dollars, 9% had committed between one and ten million dollars, and 4% (representing eight organizations) had invested more than ten million dollars in AI capabilities. However, a significant 61% of respondents declined to provide investment figures, suggesting either competitive sensitivity around AI spending or difficulty in isolating AI-specific expenditures from broader technology budgets.

The geographic distribution of survey respondents adds important context to these adoption figures. With 52% of AMCC respondents located in Central or South America and 15% in Africa, the data captures AI adoption trends in emerging markets as well as developed financial centers. External research corroborates the upward trajectory: Mercer research found that 90% of investment managers were either currently using AI or planning to use it in their investment research processes, while Coalition Greenwich reported that 85% of asset managers are actively using AI in some form.

Key AI Use Cases in Financial Markets

The IOSCO report provides granular data on how AI in capital markets manifests across different market participant types, revealing distinct patterns of adoption that reflect the unique operational demands of each sector. When IOSCO members and SROs were asked which AI use cases they observed among regulated entities, the results highlighted the broad applicability of artificial intelligence across virtually every function in financial services.

Communications with clients emerged as the most frequently observed AI application, cited by 66.7% of IOSCO member respondents. This encompasses AI-powered chatbots, automated client correspondence, personalized investment communications, and natural language processing systems that handle routine client inquiries. Algorithmic trading ranked second at 63.3%, reflecting the longstanding relationship between quantitative methods and trading operations, now supercharged by machine learning capabilities that can identify patterns and execute strategies at unprecedented speed.

Robo-advising and asset management placed third at 60%, demonstrating that AI-driven portfolio construction and automated investment advice have moved firmly into the mainstream. These platforms leverage machine learning algorithms to assess client risk profiles, construct diversified portfolios, and rebalance holdings based on market conditions and individual investor goals. Surveillance and fraud detection (53.3%) and internal productivity support (50%) rounded out the top five, the former leveraging pattern recognition to identify suspicious trading activity and the latter using generative AI tools to streamline back-office operations.
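To make the rebalancing mechanics concrete, here is a minimal, illustrative sketch of the threshold-rebalancing logic such platforms automate. The 5% drift band, the portfolio values, and the function name are assumptions for the example, not details drawn from the IOSCO report:

```python
def rebalance_orders(holdings: dict[str, float],
                     targets: dict[str, float],
                     drift_band: float = 0.05) -> dict[str, float]:
    """Return the dollar trades needed to restore target weights.

    holdings:   current market value per asset class
    targets:    target portfolio weight per asset class (sums to 1.0)
    drift_band: trade only if an asset drifts beyond this band
    """
    total = sum(holdings.values())
    orders = {}
    for asset, target_weight in targets.items():
        current_weight = holdings.get(asset, 0.0) / total
        if abs(current_weight - target_weight) > drift_band:
            # positive = buy, negative = sell, in dollars
            orders[asset] = target_weight * total - holdings.get(asset, 0.0)
    return orders

# Hypothetical 60/40 portfolio that has drifted after an equity rally
print(rebalance_orders({"equities": 700_000, "bonds": 300_000},
                       {"equities": 0.60, "bonds": 0.40}))
# -> {'equities': -100000.0, 'bonds': 100000.0}
```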

The data from market participants themselves tells a slightly different story, with internal productivity support (30.4%) leading their self-reported use cases, followed by market analysis (27.7%), internal GPT deployments (26.1%), and code generation (25%). This discrepancy between what regulators observe and what firms report likely reflects the broader scope of regulatory visibility versus individual firm awareness of their own AI footprint.


AI Risks to Investor Protection and Market Integrity

Perhaps the most consequential section of the IOSCO report examines the risks that artificial intelligence poses to the three pillars of securities regulation: investor protection, market integrity, and financial stability. The analysis reveals a complex risk landscape where AI simultaneously enhances and threatens the stability of capital markets, demanding sophisticated regulatory responses that balance innovation with prudent oversight.

Malicious uses of AI represent the most immediately threatening category of risk. The report documents how cybercriminals are leveraging AI to plan, enhance, and automate attacks against financial institutions. AI-powered phishing campaigns generate more convincing social engineering attacks, while deepfake technology enables business email compromise and identity fraud at scales previously impossible. The report notes that AI systems themselves can become targets, with adversaries employing evasion attacks, data poisoning, and backdoor exploits to compromise the integrity of AI models used in trading and risk management.

The proliferation of AI-enhanced investment fraud represents a particularly insidious threat to investor protection. Research cited in the report, conducted by the Ontario Securities Commission, found that investors exposed to AI-enhanced scams invested 22% more money than those encountering conventional fraudulent schemes. This finding underscores how AI can make fraudulent operations more persuasive and professional in appearance, eroding the traditional indicators that investors and regulators use to identify suspicious activity. AI washing, where firms make misleading claims about their AI capabilities, has already prompted enforcement actions by the U.S. Securities and Exchange Commission.

When market participants were asked to rank AI-related risks on a five-point scale, cybersecurity emerged as the dominant concern at 4.26 out of 5, followed closely by data privacy and protection at 4.11. Data-related issues including bias, drift, and quality concerns scored 3.94, while model explainability and fitness-for-purpose risks came in at 3.84. Deepfake threats and liability and accountability concerns tied at 3.83, illustrating how the question of who bears responsibility when AI systems cause harm remains one of the most challenging governance issues in modern capital markets.

Concentration and Third-Party Dependency Risks in AI

One of the most forward-looking concerns raised in the IOSCO report on AI in capital markets addresses the growing concentration risks embedded in the AI supply chain. As financial institutions increasingly rely on a small number of cloud service providers, AI model developers, and data aggregators, the potential for systemic vulnerability grows proportionally. This concentration creates single points of failure that could propagate disruptions across the entire financial system.

Technological infrastructure concentration stands out as a primary concern. The global cloud computing market is dominated by a handful of providers, and the capital-intensive nature of AI development, requiring massive computing resources, specialized hardware, and vast datasets, naturally favors incumbents and raises barriers to entry. When a single cloud provider experiences an outage, the ripple effects can impact trading platforms, risk management systems, and client-facing applications simultaneously across multiple financial institutions.
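Concentration of this kind can be quantified with the Herfindahl-Hirschman Index (HHI) familiar from antitrust analysis. The report does not prescribe a metric; the sketch below, with entirely hypothetical market shares, simply illustrates the calculation:

```python
def hhi(market_shares: list[float]) -> float:
    """Herfindahl-Hirschman Index on shares expressed as percentages.
    By convention, values above ~2,500 indicate a highly
    concentrated market."""
    return sum(s ** 2 for s in market_shares)

# Hypothetical cloud-infrastructure shares, for illustration only
print(hhi([40.0, 30.0, 20.0, 10.0]))  # 3000.0 -> highly concentrated
```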

Model provision concentration presents equally troubling dynamics. The development of state-of-the-art AI models, particularly large language models and foundation models, requires investments that only well-resourced technology companies can sustain. This economic reality means that a significant portion of the financial industry may ultimately depend on AI models from a very limited set of providers. The IOSCO report notes that this vertical concentration introduces correlated risks: if a widely-used AI model exhibits a systematic bias or vulnerability, the effects could be felt across thousands of market participants simultaneously.

Third-party dependency extends beyond primary vendors to encompass what the report terms “nth-party risks,” where financial institutions’ AI vendors themselves rely on their own chains of subcontractors and service providers. This layered dependency structure makes it difficult for firms to maintain full visibility into their AI supply chain, complicating risk assessments and potentially placing critical components outside the regulatory perimeter. A joint survey by the Bank of England and the UK Financial Conduct Authority (FCA) found that 40% of machine learning models in capital markets institutions were implemented through vendor tools and cloud services, quantifying the scale of external dependency. The emergence of DeepSeek was noted in the report as a development whose full implications for concentration dynamics have yet to be assessed.

Regulatory Frameworks for AI in Capital Markets

The IOSCO report catalogs three distinct categories of regulatory approaches that jurisdictions have adopted to address AI in capital markets, reflecting the diversity of legal traditions, market structures, and policy priorities across the global regulatory landscape. Understanding these frameworks is essential for financial institutions operating across borders and for technology providers serving the financial sector.

The first category encompasses technology-neutral frameworks where existing financial sector regulations are applied to AI activities. This approach leverages established rules governing disclosure, risk management, internal controls, third-party outsourcing, cybersecurity, and data protection without creating AI-specific requirements. The advantage of this approach lies in its immediate applicability and the familiarity of regulated entities with existing compliance obligations. However, critics argue that technology-neutral frameworks may fail to address unique characteristics of AI systems, such as their opacity, potential for autonomous decision-making, and capacity for rapid self-modification through learning.

The second category includes jurisdictions that have developed specific legal requirements or guidance for AI use within financial services. These range from rules-based mandates to principles-based frameworks that establish regulatory expectations without prescribing detailed compliance mechanisms. For example, the European Securities and Markets Authority (ESMA) issued guidance in May 2024 on AI in retail investment services under MiFID II, while the Canadian Securities Administrators (CSA) published comprehensive guidance in December 2024 addressing governance, oversight, explainability, and transparency requirements for AI in securities markets.

The third and most ambitious category features bespoke, AI-specific regulatory frameworks. The EU AI Act represents the most prominent example, establishing a risk-based classification system that applies across all sectors including financial services. The Act identifies creditworthiness evaluation and life and health insurance risk assessments as high-risk AI use cases subject to enhanced requirements.


IOSCO’s Six Measures for AI Governance

At the foundation of IOSCO’s approach to AI governance in capital markets lie the six measures established in its 2021 guidance, which remain the authoritative international standard for responsible AI deployment in financial services. The 2025 report reaffirms these measures while acknowledging that the rapid advancement of AI capabilities, particularly generative AI and agentic systems, may necessitate updates in the forthcoming Phase 2 work.

Measure 1 requires senior management oversight, mandating that organizations designate specific senior executives as responsible for AI governance and establish documented internal governance frameworks with clear accountability lines. This principle ensures that AI deployment decisions receive appropriate strategic attention rather than being delegated entirely to technology teams. The measure recognizes that AI systems can have material impacts on investment outcomes, client relationships, and regulatory compliance, warranting board-level visibility and control.

Measure 2 addresses testing and monitoring, requiring adequate testing in segregated environments, continuous validation throughout the AI system lifecycle, and verification of appropriate behavior under both stressed and normal market conditions. This measure has gained renewed importance with the advent of generative AI systems, whose outputs are inherently non-deterministic and require more sophisticated validation approaches than traditional rule-based systems. Continuous monitoring must account for model drift, where AI performance degrades over time as market conditions evolve away from training data distributions.
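As a rough illustration of what continuous drift monitoring can look like, the sketch below compares a model input’s live distribution against its training-era distribution using a two-sample Kolmogorov-Smirnov test. The data, alpha threshold, and alerting logic are assumptions for the example; production systems would track many features and performance metrics:

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drift(train_sample: np.ndarray,
                  live_sample: np.ndarray,
                  alpha: float = 0.01) -> bool:
    """Flag drift when the live distribution of a model input
    differs significantly from its training-era distribution."""
    statistic, p_value = ks_2samp(train_sample, live_sample)
    return p_value < alpha  # True = drift alarm

rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-era inputs
live = rng.normal(loc=0.4, scale=1.3, size=1_000)   # regime has shifted
print(feature_drift(train, live))  # True: escalate for model review
```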

Measures 3 and 4 focus on skills and expertise and on third-party management, respectively. The skills measure requires that organizations maintain adequate capabilities to develop, test, deploy, monitor, and oversee AI systems, with particular emphasis on ensuring that compliance and risk management functions can understand and challenge AI algorithms. The third-party management measure has become increasingly critical given the concentration risks discussed earlier, requiring clear service level agreements, performance indicators, and contractual remedies for AI services obtained from external providers.

Measures 5 and 6 address disclosure and data controls. The disclosure requirement mandates meaningful information to customers about how AI use may impact their outcomes, plus sufficient information to regulators for appropriate oversight. The data controls measure requires safeguards to ensure data quality, prevent biases, and maintain breadth sufficient for well-founded AI applications. Together, these six measures provide a comprehensive governance framework that balances the need for innovation with the imperative of protecting investors and maintaining orderly markets.

AI Model Risks: Bias, Hallucinations, and Explainability

The IOSCO report dedicates substantial attention to the technical risks inherent in AI models deployed across capital markets, recognizing that these risks have intensified significantly with the proliferation of large language models and generative AI systems. Understanding these model-level vulnerabilities is essential for risk managers, compliance officers, and technology leaders responsible for AI governance in financial institutions.

Explainability and complexity represent perhaps the most fundamental challenge facing AI in capital markets. The report highlights that it is often difficult or impossible to explain precisely how large language models compute their outputs, creating what regulators term a “black box” problem. In financial services, where regulatory frameworks frequently require that firms demonstrate the rationale behind investment decisions, risk assessments, and compliance determinations, this opacity creates a direct tension between technological capability and regulatory obligation. Ineffective disclosures, unsuitable investment decisions made by opaque systems, and hidden conflicts of interest can all result from explainability failures.
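One common, model-agnostic way to partially open the black box is permutation importance: shuffle one input at a time and measure how much held-out performance degrades. Below is a minimal sketch with scikit-learn on synthetic data standing in for a proprietary model; it illustrates one diagnostic, not a complete explainability program:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a proprietary risk or suitability model
X, y = make_classification(n_samples=2_000, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```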

Hallucinations and confabulations, where AI systems generate plausible but factually incorrect outputs, pose unique risks in financial contexts where accuracy is paramount. An AI system that hallucinates a data point in a risk assessment, fabricates a regulatory citation, or generates a misleading market analysis could trigger cascading consequences across trading, compliance, and client advisory functions. The IOSCO report notes that the non-deterministic, probabilistic nature of modern AI outputs means that the same query can produce different answers on different occasions, challenging traditional quality assurance approaches designed for deterministic systems.
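One pragmatic quality-assurance response to non-determinism is self-consistency checking: sample the same query several times and route disagreement to human review rather than trusting any single output. In this sketch, generate_answer is a hypothetical wrapper around whatever model a firm uses, and the sample count and agreement threshold are illustrative:

```python
from collections import Counter

def generate_answer(prompt: str) -> str:
    """Hypothetical wrapper around a non-deterministic LLM call."""
    raise NotImplementedError("plug in your model client here")

def consistent_answer(prompt: str, n_samples: int = 5,
                      min_agreement: float = 0.8) -> str | None:
    """Return an answer only if the model agrees with itself across
    repeated samples; otherwise return None and route to a human."""
    answers = [generate_answer(prompt) for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best if count / n_samples >= min_agreement else None
```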

Algorithmic bias represents another critical concern, manifesting through multiple pathways including biased training data, flawed model architecture, and feedback loops that amplify initial biases over time. In capital markets, bias can lead to unfair treatment of investor groups, skewed risk assessments, and discriminatory access to financial products and services. The report warns that internet and social media data used to train AI models may perpetuate existing societal biases, while selection bias in alternative data sources can lead to systematically distorted market views. These concerns are compounded by the challenge of detecting bias in complex models, particularly when discriminatory outcomes emerge from the interaction of multiple individually innocuous features.
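A basic check from which bias monitoring often starts is the demographic parity gap: the difference in positive-outcome rates across groups. The sketch below uses synthetic predictions and hypothetical group labels purely for illustration:

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray,
                           group: np.ndarray) -> float:
    """Largest gap in positive-outcome rate between any two groups."""
    rates = [predictions[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

rng = np.random.default_rng(7)
group = rng.choice(["A", "B"], size=1_000)
# Synthetic model outputs that systematically favor group A
preds = np.where(group == "A",
                 rng.random(1_000) < 0.55,
                 rng.random(1_000) < 0.40).astype(int)

gap = demographic_parity_gap(preds, group)
print(f"parity gap: {gap:.2%}")  # flag if above a policy threshold
```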

Data quality risks compound model-level vulnerabilities. The report raises the novel concern of “model collapse,” where models trained on synthetic data generated by other AI systems experience progressive degradation in performance. As the internet increasingly fills with AI-generated content, the risk that financial AI models may be training on corrupted or synthetic data grows. Poor, inaccurate, outdated, or irrelevant data can undermine even well-designed AI architectures, while insufficient sample sizes for rare but consequential events like financial crises can leave models unprepared for precisely the scenarios where they are most needed.

Global Jurisdictional Approaches to AI Regulation

The IOSCO report provides an invaluable comparative survey of how different jurisdictions around the world are approaching the regulation of AI in capital markets, revealing both common themes and significant divergences in regulatory philosophy and implementation. This jurisdictional mapping offers market participants operating across borders a practical guide to the evolving compliance landscape.

In the Asia-Pacific region, Hong Kong has been particularly proactive. The Hong Kong Monetary Authority (HKMA) published high-level principles for AI in banking as early as 2019, covering governance, application design, monitoring, and consumer protection. The Securities and Futures Commission (SFC) followed with a specific circular on the use of generative AI language models in November 2024, signaling heightened regulatory attention to the newest wave of AI capabilities. Japan adopted a different approach with its comprehensive “AI Guidelines for Business” in April 2024, establishing ten guiding principles applicable to AI developers, providers, and business users across all sectors including finance. Singapore’s “Project MindForge” initiative, co-created by the Monetary Authority of Singapore with banks and technology partners, represents a collaborative approach building on the country’s established FEAT principles of Fairness, Ethics, Accountability, and Transparency.

In the Americas, the regulatory landscape reflects the distinct approaches of different national authorities. The United States has seen activity across multiple agencies: the CFTC issued a staff advisory on AI use in regulated markets in December 2024, following an earlier request for comment on the topic; the SEC has brought enforcement actions against firms making false or misleading AI claims and included AI integration in its examination priorities; and the Treasury Department issued a comprehensive report on AI in financial services in December 2024. Canada’s CSA guidance of December 2024 provides a principles-based framework focusing on governance, oversight, explainability, transparency, and conflicts of interest. Brazil’s draft Bill 2.338/2023 represents a legislative approach to establishing an AI framework.

Europe offers the most ambitious regulatory architecture through the EU AI Act, which creates a comprehensive, risk-based framework applicable across all sectors. The Act’s identification of specific high-risk financial use cases, including creditworthiness evaluation and insurance risk assessment, establishes binding requirements that go beyond voluntary guidance. Individual European regulators have supplemented this with sector-specific work: ESMA’s guidance on AI in retail investment services under MiFID II, the Netherlands’ AFM study on machine learning in algorithmic trading, Greece’s national framework law for emerging technologies, and the UK FCA’s AI Lab initiative for engagement and innovation support. The IOSCO report found that all but one survey respondent had engaged with market participants on AI, with 15 of 27 providing oral or written guidance and 6 offering product trial or sandbox environments.

Future of AI in Capital Markets: What Comes Next

The IOSCO report concludes with forward-looking considerations that signal the direction of international regulatory efforts and underscore the urgency of continued engagement between regulators, market participants, and technology providers. As Phase 1 of IOSCO’s work, this report establishes the informational foundation for Phase 2, which will potentially develop additional tools, recommendations, and regulatory considerations for AI in capital markets.

Five priority areas emerge from the report’s forward-looking analysis. First, educating investors about AI-related investment fraud has become a pressing need as deepfakes and AI-enhanced scams grow more sophisticated. The finding that investors commit 22% more capital to AI-enhanced scams than conventional ones demonstrates the urgency of public awareness campaigns and investor education initiatives. Second, strengthening information sharing on AI risks among regulators will be essential as the technology evolves faster than any single jurisdiction can monitor independently.

Third, enhancing cooperation in the supervision of market participants regarding AI use will require new collaborative frameworks, particularly for firms operating across multiple jurisdictions with varying regulatory requirements. The report notes that approximately one-third of respondents already collaborate with overseas authorities on AI matters, though three jurisdictions reported challenges in cross-border cooperation. Fourth, supporting member efforts through technical assistance and capacity building recognizes that regulatory capabilities vary significantly across IOSCO’s membership, and effective AI oversight requires technical sophistication that some authorities may need help developing.

Fifth, engaging with international organizations including the Financial Stability Board (FSB), OECD, and others will ensure that securities regulation remains aligned with broader global frameworks for AI governance. The intersection of AI with financial stability concerns elevates this topic beyond securities regulation alone, requiring coordination across the full spectrum of financial regulatory authorities.

The emergence of agentic AI systems, which can operate with increasing autonomy and make complex decisions without direct human supervision, represents perhaps the most significant challenge on the horizon. These systems push the boundaries of existing governance frameworks designed around human oversight and accountability, raising fundamental questions about liability, transparency, and control that will need to be addressed in Phase 2 and beyond.


Frequently Asked Questions

What is the IOSCO 2025 report on AI in capital markets?

The IOSCO 2025 report (IOSCOPD788) is a comprehensive analysis of how artificial intelligence is being used across capital markets globally. Published in March 2025, it examines AI adoption rates, use cases across broker-dealers, asset managers, and exchanges, identifies key risks to investor protection, market integrity, and financial stability, and reviews regulatory responses from 24 jurisdictions worldwide.

How widely is AI adopted in capital markets according to IOSCO?

According to IOSCO’s AMCC survey of 184 market participants, 49% reported having adopted AI and 50% had invested in it. Of those with AI use cases, 41% were already in production while 8% remained in pilot phase. The most common applications include internal productivity support (30.4%), market analysis (27.7%), and code generation (25%).

What are the biggest AI risks in capital markets identified by IOSCO?

IOSCO identifies four broad risk categories: malicious uses of AI (cyberattacks, deepfakes, fraud), AI model and data risks (bias, hallucinations, explainability), concentration and third-party dependency risks (cloud provider concentration, vendor lock-in), and human-AI interaction challenges (accountability gaps, over-reliance). Market participants ranked cybersecurity (4.26/5) and data privacy (4.11/5) as the top concerns.

How are regulators responding to AI in financial markets?

IOSCO found three categories of regulatory response: technology-neutral frameworks applying existing rules to AI, specific legal requirements or guidance for AI use, and bespoke AI-specific frameworks. Notable examples include the EU AI Act, Hong Kong’s SFC circular on GenAI, Canada’s CSA guidance, and the US CFTC staff advisory. All but one survey respondent had engaged with market participants on AI governance.

What AI use cases are most common among financial market participants?

According to IOSCO member observations, the top AI use cases are communications with clients (66.7%), algorithmic trading (63.3%), robo-advising and asset management (60%), surveillance and fraud detection (53.3%), and internal productivity support (50%). Broker-dealers focus on client communications and trading, while asset managers prioritize robo-advising and investment research.

What does IOSCO recommend for managing AI risks in finance?

IOSCO’s existing 2021 guidance includes six measures: senior management oversight with clear accountability, adequate testing and continuous monitoring, sufficient skills and expertise across teams, robust third-party management with clear SLAs, meaningful disclosure to customers and regulators, and strong data controls to prevent bias. The 2025 report signals Phase 2 work on additional tools and recommendations.
