AI and Quantum in Financial Services: Oliver Wyman Transformation Report
Table of Contents
- AI and Quantum Convergence in Finance
- Evolving AI Use Cases in Financial Services
- AI Deployment Data: Where Banks Invest
- Building a Comprehensive AI Operating Model
- AI Governance and Risk Management
- Quantum Technology in Financial Services
- Post-Quantum Cryptography Preparedness
- AI Policy and Public-Private Partnerships
- Five Strategic Calls to Action
- Preparing for the AI-Quantum Future
📌 Key Takeaways
- 78% Fraud Detection: Fraud detection is the most widely deployed predictive AI use case, with 78% of surveyed financial institutions using it actively.
- Experimentation to Value: Financial institutions are shifting from AI experimentation to measurable value delivery, with generative AI moving into production.
- Quantum Security Now: HSBC is already piloting quantum key distribution for FX transactions — post-quantum cryptography preparation is urgent.
- Operating Model First: Technology alone is insufficient; success requires a comprehensive AI operating model with governance, risk management, and organizational readiness.
- Global AI Regulation: 12 countries and regions now have active AI policies, with public-private partnerships emerging as vital accelerators of safe adoption.
AI and Quantum Convergence in Financial Services
The convergence of artificial intelligence and quantum computing represents one of the most significant technological inflection points in the history of financial services. Oliver Wyman’s joint report with the Global Finance and Technology Network (GFTN), drawing on insights from the Singapore Fintech Festival 2024, maps the current landscape of AI and quantum technology adoption across banking and financial services — revealing an industry at a pivotal juncture between experimentation and enterprise-scale transformation.
The report, authored by Gaurav Kwatra, Kapil Sabharwal, Adrielle Lim, and Poon Yi Lin, makes a compelling case that global trends are propelling adoption of advanced digital tools in financial services at an unprecedented pace. Leaders who fail to prepare their organizations for this shift risk being left behind as competitors capture efficiency gains, superior customer experiences, and new revenue streams enabled by AI and quantum technologies.
What makes this analysis particularly valuable is its grounding in real data. Based on Oliver Wyman’s Asia Pacific Generative AI Benchmarking Survey of 23 financial institutions conducted in November 2024, the report provides concrete deployment metrics that move beyond hype to reveal where financial institutions are actually investing — and where critical gaps remain. As our analysis of the Oliver Wyman Known Unknowns debate on AI in financial services shows, these technology adoption patterns sit within a broader strategic context of industry-wide transformation.
Evolving AI Use Cases Across Financial Services
The report identifies a fundamental shift in how financial institutions approach AI. Where generative AI was previously the domain of innovation labs and proof-of-concept projects, organizations are now transitioning from experimentation to value-focused approaches that deliver measurable business outcomes. This maturation is critical — it marks the point at which AI moves from a cost center to a value driver.
AI’s potential to increase efficiency, improve effectiveness, and elevate customer experiences is emerging across use cases of varying scales. From automating routine compliance processes to powering real-time fraud detection systems, AI is reshaping operations at every level of the financial services stack.
However, the report sounds an important warning: predictive AI capabilities remain insufficiently tapped for advanced applications, even within organizations at the forefront of technology adoption. Hyper-personalized products and services — one of AI’s most promising value propositions for financial services — remain largely unrealized. This suggests that the industry is capturing the low-hanging fruit of AI (automation, fraud detection, basic analytics) while leaving significantly more value on the table in areas that require deeper integration of AI into product design and customer engagement strategies.
As former MAS Managing Director Ravi Menon noted at the festival: “AI is beginning to make significant inroads into financial services. We’re seeing both AI-powered innovation and potentially AI-driven risks.” This dual nature — opportunity alongside risk — defines the current moment for financial institutions navigating AI adoption.
AI Deployment Data: Where Financial Institutions Actually Invest
The report’s survey data provides a revealing snapshot of predictive AI deployment across financial institutions. The findings organize deployment into three categories: growth and retention, productivity and operations, and risk management — with risk management emerging as the clear leader.
In risk management, fraud detection dominates at 78% deployment — the highest rate across all categories. This reflects both the clear ROI of AI-powered fraud prevention and the regulatory pressure to maintain robust detection systems. Risk modeling follows at 57%, with KYC and verifications at 43%, credit underwriting at 39%, liquidity forecasting at 35%, and audit and compliance trailing at just 17%.
In productivity and operations, data and machine learning leads at 52%, followed by process automation at 48%, error detection at 26%, and human resources at 17%. These figures suggest that while operational AI is gaining traction, the most sophisticated applications — predictive workforce planning, intelligent process optimization — remain in early stages.
In growth and retention, marketing leads at 48%, customer service at 39%, sales at 30%, and client attrition management at 22%. The relatively low adoption in customer-facing applications, particularly client retention, represents a significant missed opportunity. Given that acquiring new customers typically costs five to seven times more than retaining existing ones, the underdeployment of AI in retention strategies suggests financial institutions are not yet leveraging AI where it could deliver the greatest competitive advantage.
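The deployment figures above can be captured in a small data structure to make the gaps easier to spot. A minimal sketch, using only the percentages quoted in the survey:

```python
# Predictive AI deployment rates from the Oliver Wyman survey (percent of
# institutions reporting active use), grouped as in the report.
deployment = {
    "risk_management": {
        "fraud_detection": 78, "risk_modeling": 57, "kyc_verifications": 43,
        "credit_underwriting": 39, "liquidity_forecasting": 35,
        "audit_compliance": 17,
    },
    "productivity_operations": {
        "data_and_ml": 52, "process_automation": 48,
        "error_detection": 26, "human_resources": 17,
    },
    "growth_retention": {
        "marketing": 48, "customer_service": 39,
        "sales": 30, "client_attrition": 22,
    },
}

# Flatten and rank ascending to surface the least-deployed use cases —
# the areas where the report argues untapped value sits.
ranked = sorted(
    (pct, use_case, category)
    for category, cases in deployment.items()
    for use_case, pct in cases.items()
)
least_deployed = [(use_case, pct) for pct, use_case, _ in ranked[:3]]
print(least_deployed)
# -> [('audit_compliance', 17), ('human_resources', 17), ('client_attrition', 22)]
```

Sorting the flattened figures puts audit and compliance, human resources, and client attrition management at the bottom — exactly the underinvested areas the report calls out.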
Building a Comprehensive AI Operating Model for Finance
One of the report’s most important contributions is its emphasis that merely implementing the latest technology is not enough to unlock the full potential of AI. Success requires a comprehensive operating model that goes far beyond technology deployment to encompass organizational design, governance, and cultural change.
The Oliver Wyman framework identifies five pillars of a successful AI operating model: solving for the customer, pragmatic prioritization, governance, effective risk management, and organizational readiness. The most successful organizations understand the importance of getting these building blocks right before scaling AI across the enterprise.
The customer-first principle is foundational. The report argues that AI adoption should begin with user-first and customer-first principles, prioritizing value creation rather than adopting AI for its own sake. This may seem obvious, but in practice, many financial institutions have invested in AI capabilities driven by technology availability or competitive pressure rather than clearly identified customer needs.
Pragmatic prioritization means focusing AI investment on use cases with clear, measurable business value rather than pursuing every possible application simultaneously. The most effective institutions create structured prioritization frameworks that evaluate AI opportunities based on customer impact, revenue potential, implementation complexity, and risk profile.
This resonates with findings from our analysis of the McKinsey State of AI 2025, which similarly emphasizes that AI value capture depends on organizational readiness as much as technical capability.
AI Governance and Risk Management in Banking
The governance dimension deserves particular attention. The report emphasizes that success hinges on clearly defining the roles of model governance, data governance, and AI governance teams within what it describes as a new, complex, and overlapping ecosystem. These three governance functions are distinct but interdependent, and many institutions struggle to coordinate them effectively.
Model governance ensures that AI models perform as intended, remain accurate over time, and comply with regulatory requirements. Data governance addresses the quality, integrity, and appropriate use of the data that feeds AI systems. AI governance — the newest and often least mature function — addresses the broader organizational, ethical, and strategic questions about how AI is deployed across the institution.
Establishing fundamental controls from the outset is crucial for building customer trust. The report argues that trust is not something that can be bolted on after AI systems are deployed — it must be designed into the governance framework from day one. This includes explainability (can the institution explain to customers and regulators why an AI system made a particular decision?), fairness (do AI systems treat all customers equitably?), and accountability (who is responsible when an AI system makes an error?).
The risk management culture must extend beyond compliance to embrace a proactive, continuous learning approach. An integrated AI risk management framework should not only identify and mitigate emerging risks but also encourage a culture of continuous learning and adaptation. This is particularly important as AI capabilities evolve rapidly — risk frameworks that were adequate six months ago may already be obsolete.
Quantum Computing Technology in Financial Services
The report’s treatment of quantum technology provides a balanced assessment of a field that generates significant hype alongside genuine transformative potential. Although still in its early stages, quantum technology presents both transformative opportunities and significant security challenges for the financial services sector.
On the opportunity side, quantum computing has the potential to accelerate machine learning and significantly enhance AI’s performance. Quantum algorithms could solve optimization problems — portfolio optimization, risk scenario modeling, derivatives pricing — that are computationally intractable for classical computers. The convergence of quantum computing with AI could unlock capabilities that neither technology alone can deliver.
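Portfolio selection illustrates why these problems strain classical hardware: choosing which of n assets to include means searching 2^n subsets. A toy brute-force sketch (returns, risks, and the risk cap are made-up numbers) shows the exponential structure that quantum optimization approaches aim to tame:

```python
from itertools import combinations

# Toy portfolio selection: pick the subset of assets maximizing expected
# return while keeping total risk under a cap. All numbers are invented.
assets = {"A": (0.08, 0.10), "B": (0.12, 0.25),
          "C": (0.05, 0.05), "D": (0.10, 0.18)}  # name -> (return, risk)
RISK_CAP = 0.35

best, best_ret = (), 0.0
names = list(assets)
for r in range(1, len(names) + 1):       # enumerates all 2^n - 1 subsets
    for combo in combinations(names, r):
        risk = sum(assets[a][1] for a in combo)
        ret = sum(assets[a][0] for a in combo)
        if risk <= RISK_CAP and ret > best_ret:
            best, best_ret = combo, ret
print(best, round(best_ret, 2))
```

Four assets means 15 subsets; 100 assets means roughly 10^30, which is why exact enumeration fails at realistic scale and why quantum optimization algorithms target this problem class.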
However, the report notes that limited applications have been realized to date, as significant technological breakthroughs are necessary to unlock the full potential. Most financial institutions are currently in the exploration phase, evaluating the potential and day-to-day role of quantum technology rather than deploying it in production.
The HSBC example stands out as a notable exception. Colin Bell, CEO of HSBC, shared at the Singapore Fintech Festival: “We’ve joined a distribution network in the UK for quantum key distribution. We generate quantum keys to protect FX transactions and have proven through hardware we can distribute these keys around the network.” This represents a real-world pilot of quantum technology already delivering security value at a major global bank. NIST has since finalized its first quantum-safe cryptography standards (FIPS 203, 204, and 205), giving enterprises concrete migration targets.
Post-Quantum Cryptography: Banking’s Urgent Imperative
The security dimension of quantum technology is perhaps more urgent than the opportunity dimension. Experts at the Singapore Fintech Festival emphasized the importance of understanding the threat to current cryptographic methods, urging financial institutions to begin preparing for a post-quantum world.
The threat is conceptually simple but operationally complex. Current encryption systems — including those protecting trillions of dollars in daily financial transactions — rely on mathematical problems that classical computers cannot solve in reasonable time. Quantum computers, once sufficiently powerful, could break these encryption systems, potentially exposing banking communications, transaction records, and customer data.
The “harvest now, decrypt later” attack vector makes this threat immediate even though fully capable quantum computers do not yet exist. Adversaries can capture encrypted financial data today and store it until quantum computers become available to decrypt it. For financial data with long-term sensitivity — strategic investment positions, M&A communications, customer identity information — this means the threat window is already open.
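A common way to reason about this urgency is Mosca's inequality: if the time data must stay secret (x) plus the time migration will take (y) exceeds the time until a cryptographically relevant quantum computer arrives (z), data captured today is already at risk. A minimal sketch with illustrative numbers:

```python
def quantum_exposed(shelf_life_years: float,
                    migration_years: float,
                    years_to_quantum: float) -> bool:
    """Mosca's inequality: data harvested today is at risk if
    x + y > z (secrecy lifetime + migration time > time to quantum)."""
    return shelf_life_years + migration_years > years_to_quantum

# Illustrative numbers only: 10-year data sensitivity, 5-year migration,
# quantum capability assumed 12 years out.
print(quantum_exposed(10, 5, 12))  # 15 > 12 -> True: threat window is open
```

The timeline inputs are estimates each institution must make for itself; the inequality simply shows why long-lived data forces migration to start well before quantum computers exist.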
Preparing for post-quantum cryptography is a multi-year journey that financial institutions should begin now. HSBC’s Colin Bell emphasized this point: “Preparing for post-quantum cryptography is a journey every big institution needs to go on.” The preparation involves inventorying cryptographic assets, assessing quantum vulnerability across systems, testing quantum-resistant algorithms, and developing migration roadmaps — all while maintaining current security operations. The NIST Post-Quantum Cryptography Standardization project provides the foundational standards that institutions should begin implementing.
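The inventory step can begin as simply as tagging each cryptographic asset by algorithm family. Public-key schemes built on factoring or discrete logarithms (RSA, DH, ECDSA/ECDH) are broken outright by Shor's algorithm, while symmetric ciphers are only weakened by Grover's, so larger keys suffice. A sketch of such a classifier — the system names are hypothetical:

```python
# Shor's algorithm breaks factoring/discrete-log public-key crypto outright;
# Grover's only halves effective symmetric strength, so AES-256 remains safe.
SHOR_VULNERABLE = {"RSA", "DH", "DSA", "ECDSA", "ECDH"}

def quantum_risk(algorithm: str, key_bits: int) -> str:
    if algorithm in SHOR_VULNERABLE:
        return "migrate"      # replace with a post-quantum scheme
    if key_bits < 256:
        return "strengthen"   # e.g. move AES-128 to AES-256
    return "ok"

inventory = [  # hypothetical systems, for illustration only
    ("payments-gateway", "RSA", 2048),
    ("archive-encryption", "AES", 128),
    ("backup-encryption", "AES", 256),
]
report = {name: quantum_risk(alg, bits) for name, alg, bits in inventory}
print(report)
```

Systems flagged "migrate" are candidates for the NIST post-quantum standards (e.g. ML-KEM under FIPS 203); in practice a real inventory also records protocol versions, certificate lifetimes, and data sensitivity.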
AI Policy and Public-Private Partnership Evolution
The regulatory landscape for AI in financial services is evolving rapidly, and the report maps this evolution across twelve countries and regions with active AI public policies and regulations: Canada, the United States, the United Kingdom, Europe (EU-level), India, China, South Korea, Thailand, Singapore, Indonesia, Malaysia, and Australia.
This global regulatory mosaic creates both challenges and opportunities for financial institutions. Institutions operating across multiple jurisdictions must navigate different regulatory frameworks, compliance requirements, and cultural expectations around AI use. However, the report identifies an encouraging trend: policymakers worldwide are recognizing the need to balance innovation and risk management to accelerate AI adoption, rather than simply restricting it.
Public-private partnerships have emerged as a vital accelerator of safe AI adoption. Through funding and support of various initiatives, these partnerships create sandboxes for experimentation, establish best practices for responsible AI deployment, and build the institutional knowledge needed for effective regulation. The Singapore model, where the Monetary Authority of Singapore has actively fostered AI innovation while maintaining rigorous risk management standards, is highlighted as a reference case.
There is an impetus for regulations to serve as catalysts for progress rather than inhibitors to innovation. This represents a significant shift from the early days of AI regulation, when many in the industry feared that regulation would stifle innovation. The Bank for International Settlements’ research on AI in central banking further demonstrates how regulators are evolving their approach to AI governance. Increasingly, both regulators and industry leaders recognize that well-designed regulation can actually accelerate adoption by building the trust and confidence that customers, investors, and boards need to support AI investment. Our analysis of NIST’s cybersecurity framework for AI provides deeper insight into how regulatory standards are shaping responsible AI deployment.
Five Strategic Calls to Action for Financial Leaders
The report concludes with five clear calls to action that provide a practical framework for financial services leaders navigating the AI and quantum transformation:
First, chase the value of AI and quantum technology, not the hype. Focus on initiatives that deliver tangible business value rather than pursuing technology for its own sake. Leverage pilot projects to validate outcomes and guide investment decisions. This means rigorous measurement of AI ROI, honest assessment of which use cases deliver and which do not, and willingness to kill projects that do not meet value thresholds.
Second, collaborate with industry players and regulators. Develop partnerships with industry stakeholders and regulators to share insights, establish best practices, and ensure compliance early on. In a rapidly evolving regulatory environment, proactive engagement with regulators is a competitive advantage — not a burden.
Third, educate the workforce and foster a data-driven culture. Empower employees through training and upskilling, enhance data literacy across the organization, and continually promote a culture that treats data as a strategic asset. AI capabilities are only as good as the human teams that design, deploy, and oversee them.
Fourth, reinvent AI governance to drive adoption. Strengthen AI governance frameworks and focus on organization-wide adoption through targeted training and support. Governance should enable AI deployment, not just constrain it — a shift that requires governance teams to understand business objectives as deeply as they understand risk requirements.
Fifth, cultivate a proactive AI risk management culture. Establish an integrated AI risk management framework that not only identifies and mitigates emerging risks but also encourages a culture of continuous learning and adaptation. Risk management in the AI era must be dynamic, forward-looking, and embedded in operations — not a periodic compliance exercise.
Preparing for the AI-Quantum Financial Services Future
The Oliver Wyman-GFTN report paints a picture of an industry at the threshold of transformative change. The data is clear: financial institutions are deploying AI at scale in risk management (78% fraud detection), operations (52% data and ML), and customer engagement (48% marketing). The shift from experimentation to value delivery is real and accelerating.
Yet significant gaps remain. Predictive AI for hyper-personalization is underutilized. Client attrition management — one of the highest-ROI applications of AI — is deployed at just 22%. Audit and compliance, where AI could dramatically reduce costs and improve accuracy, sits at only 17%. These gaps represent opportunities for institutions willing to invest where others have not.
The quantum dimension adds both urgency and possibility. While full-scale quantum computing remains years away from commercial readiness, the security implications are immediate. Financial institutions that begin post-quantum cryptography preparation now will be better positioned than those that wait for the threat to materialize. HSBC’s quantum key distribution pilot demonstrates that practical quantum applications in banking are no longer science fiction.
By embracing innovative practices, effectively managing risks, and actively exploring the potential of these technologies — coupled with pragmatic regulatory approaches — organizations can contribute to a future where AI and quantum advancements serve as catalysts for positive change. The time for exploration is ending; the time for strategic commitment is now.
For additional perspective on AI’s transformative role, see our analysis of FSB monitoring of AI adoption in the financial sector.
Frequently Asked Questions
How are financial services using AI and quantum technology together?
Financial institutions are deploying AI across risk management, fraud detection (78% adoption), and customer engagement, while exploring quantum technology for cryptographic security and computational acceleration. HSBC is already piloting quantum key distribution to protect FX transactions, demonstrating early convergence of these technologies.
What is the most widely deployed AI use case in financial services?
According to Oliver Wyman’s Asia Pacific Generative AI Benchmarking Survey, fraud detection leads with 78% of financial institutions deploying predictive AI for this purpose. Risk modeling follows at 57%, with KYC and verifications at 43% and credit underwriting at 39%.
What is post-quantum cryptography and why does it matter for banks?
Post-quantum cryptography refers to encryption methods designed to withstand attacks from quantum computers. Current cryptographic systems protecting banking transactions could be broken by sufficiently powerful quantum computers. Financial institutions must prepare now by understanding quantum threats and transitioning to quantum-resistant encryption standards.
What are the five calls to action for AI in financial services?
Oliver Wyman recommends: (1) Chase AI value not hype through measurable pilot projects, (2) Collaborate with industry players and regulators, (3) Educate the workforce and foster data-driven culture, (4) Reinvent AI governance to drive adoption, and (5) Cultivate a proactive AI risk management culture across the organization.
How should financial institutions build an AI operating model?
Success requires a comprehensive operating model that prioritizes solving for the customer, pragmatic prioritization, governance, effective risk management, and organizational readiness. This includes clearly defining roles for model, data, and AI governance teams, and establishing fundamental controls from the outset to build customer trust.