AI Washing in Investment | Signs, Risks and Solutions
Table of Contents
- What Is AI Washing and Why It Matters for Investors
- The Surge of AI Adoption Claims in Financial Services
- Why Investment Firms Engage in AI Washing
- The Jenga Problem — AI Risks for Existing Investment Processes
- Fundamental vs. Quantitative Managers and AI Washing
- How to Detect AI Washing — The Personnel Check
- AI Washing Due Diligence — The CFA Institute Framework
- AI Washing vs. Explainable AI and Transparency
- Regulatory Implications of AI Washing in Finance
- Eliminating AI Washing for Better Investment Outcomes
📌 Key Takeaways
- AI Washing Defined: Companies falsely or inaccurately claim to leverage AI in their investment processes, using buzzwords and marketing to exaggerate capabilities beyond their actual implementation.
- 52% Claim GenAI Use: Financial services firms reporting generative AI usage jumped from 40% in 2023 to 52% in 2025, intensifying the pressure that drives AI washing behavior.
- The Personnel Test: The most reliable way to assess AI claims is investigating whether department leadership has genuine expertise in data science and machine learning — not just quantitative backgrounds.
- The Jenga Problem: Firms fear that incorporating AI into existing processes may disrupt proven systems, making them reluctant to genuinely adopt AI even while marketing AI capabilities.
- 10-Question Framework: The CFA Institute provides a structured due diligence questionnaire covering algorithm specifics, performance comparisons, data sources, robustness testing, and governance structures.
What Is AI Washing and Why It Matters for Investors
The CFA Institute’s June 2025 research report by Joseph Simonian introduces a phenomenon that is reshaping the landscape of investment management due diligence: AI washing. Defined as the practice where companies, organizations, and individuals falsely or inaccurately claim to leverage artificial intelligence technologies to enhance their investment processes, AI washing encompasses the use of buzzwords and marketing strategies that exaggerate the true capabilities or presence of AI in business activities. The result is client and stakeholder confusion, growing skepticism toward genuine AI innovators, and potential ethical concerns that undermine trust in the investment management industry.
The report draws an important distinction between what it terms “strong AI” — a theoretical replication of generalized human intelligence including emotions, common sense reasoning, and contextual understanding — and “weak AI,” the computational and statistical tools that actually exist today. Weak AI encompasses supervised and unsupervised machine learning, reinforcement learning, natural language processing, and generative AI. When the report discusses AI washing, it focuses on firms that overstate their use of these practical tools, not companies claiming to have achieved artificial general intelligence. This distinction matters because it sets a realistic bar for what genuine AI adoption in investment management actually looks like.
Consider the difference between genuine and misleading AI claims. A portfolio management team that has built a machine learning model that takes data feeds from the firm’s database, trains on that data, learns meaningful patterns, and produces buy and sell trades for specific securities — supported by measurable investment and business improvements — represents legitimate AI adoption. By contrast, a team whose investment process is driven primarily by qualitative fundamentals but that also uses various large language models to inform some decisions would be engaging in AI washing if it described its process as “AI-driven.” Even though the LLM use may be genuinely additive, it does not drive the process, making the characterization misleading. For a broader perspective on how AI is transforming investment practices, see our interactive analysis of AI in investment management.
The Surge of AI Adoption Claims in Financial Services
Data from NVIDIA’s “State of AI in Financial Services: 2025 Trends” report, which surveyed 600 global financial services professionals, reveals the scale of AI adoption claims that creates fertile ground for AI washing. A full 57% of respondents reported they are using or considering using AI for data analytics, while 52% claimed to use generative AI, up 12 percentage points from the 40% reported in 2023. These figures suggest an industry rapidly embracing AI technologies, but the CFA Institute report raises a critical question: how many of these claims reflect substantive implementation versus superficial adoption motivated by commercial pressure?
The usage figures become even more striking in specific domains. Some 38% of respondents stated they use AI for trading and portfolio optimization, a remarkable increase from just 15% in 2023. Similarly, 32% reported using AI for pricing, risk management, and underwriting, up from 13% in 2023. Meanwhile, 37% believe AI has created operational efficiencies and 32% believe it has created a competitive advantage for their firms. While these numbers suggest genuine progress, the CFA Institute’s research suggests that the gap between claimed and actual AI capabilities may be significant in many cases, particularly when commercial incentives outweigh genuine technological readiness.
The author makes a carefully calibrated assessment: AI washing is “likely not widespread at present” given the current state of adoption, and because of its “inherently subjective nature, it is almost impossible to quantify.” Few academic or industry studies have explored the issue systematically. However, the rapid acceleration of AI adoption claims — combined with the competitive dynamics of the investment industry — suggests that the risk of AI washing is growing in proportion to the perceived commercial importance of appearing AI-sophisticated.
Why Investment Firms Engage in AI Washing
The CFA Institute report identifies a fundamental tension at the heart of AI washing in investment management. Firms are commercially induced to develop AI applications — demonstrating adoption of cutting-edge technologies presumably increases the chances of attracting new business and retaining existing clients. This commercial pressure is amplified by competitive dynamics: asset managers are deeply reluctant to give any impression of inferiority in technological and quantitative tool adoption. Appearing to lag behind competitors in AI implementation is perceived as a “cardinal sin,” especially for quantitatively oriented firms.
However, many firms are unwilling or unable to procure the necessary talent and technology because any serious AI effort requires considerable time and resources. Genuine AI implementation demands significant technology spending for software and hardware, hiring the right people with specialized skills, and investing in time-consuming processes for program design, implementation, and maintenance. Simply allocating budget to technology is not enough without proper human capital — and building an appropriate AI team and infrastructure takes enormous effort. The result of this tension is predictable: commercial reasons can induce firms to overstate the degree to which they actually use AI tools, finding it more efficient to project AI sophistication than to build it.
The competitive fear dynamic creates a self-reinforcing cycle. As some firms make genuine advances in machine learning and AI capabilities, others feel the pressure to match those claims — regardless of whether they have made comparable investments. This fear of falling behind is particularly acute because the investment industry’s marketing ecosystem rewards technological narratives. Fund selectors, consultants, and allocators increasingly include AI adoption as part of their evaluation criteria, making the commercial cost of appearing AI-deficient tangible and immediate. The CFA Institute report argues that financial products deserve the same truth-in-advertising standards as computers and pharmaceuticals, noting that “financial products are just as much products” and transparency should apply regardless of product type.
The Jenga Problem — AI Risks for Existing Investment Processes
One of the most compelling frameworks in the CFA Institute report is what it calls the “Jenga problem.” Just as removing a block in the game of Jenga risks collapsing the entire tower, asset managers fear that incorporating AI components into their existing investment processes might disrupt proven systems more than help them. Most mature quantitative firms already have well-developed, fairly intricate investment processes in place — processes that have been refined over years or decades and that generate commercially successful products. Making substantive changes by replacing traditional statistical elements with AI-driven components risks producing unfavorable investment outcomes if done hastily or without sufficient understanding of how the new components interact with existing ones.
This risk calculus creates a powerful incentive structure for AI washing. If a firm’s existing investment process is generating acceptable returns and attracting client capital, the rational calculation may favor maintaining that process while adding an AI marketing narrative rather than genuinely integrating AI and risking disruption. The Jenga problem is particularly acute for quantitative firms, which already employ sophisticated mathematical and statistical models. For these firms, AI is not a revolution from zero — it is an incremental advancement over existing quantitative methods, and the marginal benefit of genuine AI adoption must be weighed against the real risk of destabilizing a working system.
The report identifies this dynamic as one of the major reasons AI washing is a risk specifically in asset management. Unlike technology companies where AI is often a core product capability, investment firms can maintain client relationships and generate returns without AI, making the cost-benefit calculation of genuine adoption less straightforward. Firms may already sell commercially successful products that do not use AI, creating reluctance to change what is working. The temptation to claim AI enhancement while preserving the existing process intact is, from a purely commercial perspective, the path of least resistance.
Fundamental vs. Quantitative Managers and AI Washing
The CFA Institute research draws an important distinction between how AI washing manifests differently across investment styles. Quantitative managers — firms whose processes are already built on mathematical and statistical foundations — face the Jenga problem most acutely. They have intricate existing processes that work, and the risk of disrupting them with AI components is tangible. These firms may engage in AI washing because competitors are advancing faster in genuine AI adoption, and the fear of appearing technologically inferior is especially acute in a domain where quantitative sophistication is the core value proposition.
Fundamental managers face a different challenge. AI and machine learning are trendy topics that clients want to discuss, creating pressure to demonstrate AI literacy and adoption during due diligence conversations and marketing presentations. However, incorporating AI substantively while keeping decision-making fundamentally in human hands — the essence of fundamental investing — is genuinely challenging. How does a portfolio manager who selects stocks based on deep company analysis, management assessment, and competitive dynamics meaningfully integrate machine learning without undermining the discretionary judgment that defines their approach? This dilemma tempts fundamental managers to exaggerate their AI use, claiming it drives or significantly enhances their process when it may serve only a peripheral role.
Interestingly, the report notes a paradoxical dynamic for some managers, particularly those with qualitative or discretionary approaches. Rather than fearing they appear insufficiently AI-driven, some managers worry that appearing overly reliant on AI will undermine investor confidence in the manager’s ability to add independent value. If the market believes AI is commoditized — available to anyone with a subscription — then claiming heavy AI reliance may actually diminish a manager’s differentiation story. This creates an unusual situation where AI washing works in both directions: some firms overstate AI use to appear cutting-edge, while others understate it to preserve their narrative of unique human insight.
How to Detect AI Washing — The Personnel Check
The CFA Institute report provides practical guidance for investors seeking to evaluate AI claims, beginning with what it identifies as the easiest and most revealing initial test: the personnel check. Before asking any technical questions about algorithms or data, investors should investigate the people working on a firm’s AI projects, with particular attention to leadership. The principle is straightforward: if a firm’s head of data science or AI is someone who has worked at the company for a long period but has scant experience and education in artificial intelligence, data science, or machine learning, that is a significant red flag indicating potential AI washing.
The report emphasizes that having “quants” on staff is insufficient grounds for accepting AI claims. Many quantitative professionals are educated in mathematics, physics, or engineering — disciplines that provide strong analytical foundations but do not necessarily include training in modern machine learning methods, deep learning architectures, or the practical challenges of deploying AI systems. Leadership of AI initiatives must be able to evaluate what the department produces, which requires genuine expertise in the specific methods being claimed. The report draws a comparison to the technology sector, where department leaders almost always have extensive relevant technical expertise — investment firms, the author argues, should be held to similar standards.
Beyond formal credentials, the personnel check should assess whether a firm has made the organizational investments that genuine AI adoption requires. Does the firm have dedicated data engineers who build and maintain data pipelines? Are there ML engineers who handle model deployment and monitoring? Is there a model governance function that reviews AI outputs before they influence investment decisions? The absence of these specialized roles — distinct from traditional quant researchers — suggests that a firm’s AI capabilities may be more aspirational than operational, regardless of what its marketing materials claim. For insights into how organizations are building genuine AI capabilities, explore our interactive guide to building AI teams in financial services.
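To make the personnel check concrete, the organizational questions above can be sketched as a simple screening script. Everything here is illustrative: the field names, role counts, and red-flag wording are assumptions for demonstration, not part of the CFA Institute framework.

```python
# Hypothetical sketch of the personnel and organizational check described above.
# The roles and red-flag rules are illustrative assumptions, not the report's.

from dataclasses import dataclass


@dataclass
class AITeamProfile:
    """Minimal description of a firm's claimed AI organization."""
    head_has_ml_background: bool    # formal ML/data-science training or experience
    dedicated_data_engineers: int   # staff building and maintaining data pipelines
    ml_engineers: int               # staff deploying and monitoring models
    has_model_governance: bool      # independent review of AI outputs


def personnel_check(profile: AITeamProfile) -> list:
    """Return a list of red flags suggested by the personnel test."""
    flags = []
    if not profile.head_has_ml_background:
        flags.append("AI/data-science head lacks ML education or experience")
    if profile.dedicated_data_engineers == 0:
        flags.append("no dedicated data engineers for pipelines")
    if profile.ml_engineers == 0:
        flags.append("no ML engineers for deployment and monitoring")
    if not profile.has_model_governance:
        flags.append("no model governance function reviewing AI outputs")
    return flags


# Example: a firm whose long-tenured quants lead the AI effort without ML training
firm = AITeamProfile(
    head_has_ml_background=False,
    dedicated_data_engineers=0,
    ml_engineers=1,
    has_model_governance=True,
)
for flag in personnel_check(firm):
    print("RED FLAG:", flag)
```

The point of the sketch is that the check is binary and cheap: the absence of any of these roles can be established in a single due diligence conversation, before any technical questioning begins.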
AI Washing Due Diligence — The CFA Institute Framework
The centerpiece of the CFA Institute’s practical guidance is a structured due diligence questionnaire designed to systematically evaluate AI claims. The ten-question framework covers the full spectrum of genuine AI implementation, from algorithm selection to governance structures, and is designed to quickly expose superficial claims that cannot withstand technical scrutiny.
The first questions focus on specifics: what type of algorithm or combination is the firm using, and how does it enhance forecasting? Genuine practitioners can describe their model architectures, explain why they chose specific approaches over alternatives, and articulate the theoretical basis for expecting those methods to work in their investment context. The framework then asks for quantitative performance comparisons — how does the AI model outperform simpler models on specific metrics? Firms engaged in genuine AI implementation will have backtests, out-of-sample results, and comparisons against benchmarks and equal-weighted portfolios. Those engaged in AI washing will struggle to provide these.
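As a hedged illustration of what such a comparison might look like, the sketch below pits a stand-in forecast against a naive baseline (the training-sample mean) on out-of-sample mean squared error. The data are synthetic and the “model” is a placeholder; a real evaluation would use the manager’s actual forecasts and the firm’s stated metrics.

```python
# Illustrative sketch of the out-of-sample comparison the framework asks for:
# does the claimed model beat a simple baseline on a stated metric?
# Synthetic data throughout; nothing here comes from the CFA Institute report.

import random

random.seed(7)

# Synthetic monthly returns: a small cyclical signal plus Gaussian noise
true_signal = [0.01 * (i % 5 - 2) for i in range(60)]
returns = [s + random.gauss(0.0, 0.02) for s in true_signal]

train, test = returns[:48], returns[48:]
signal_test = true_signal[48:]

# Baseline: forecast next month's return as the training-sample mean
baseline_forecast = sum(train) / len(train)


def mse(forecasts, realized):
    """Mean squared forecast error."""
    return sum((f - r) ** 2 for f, r in zip(forecasts, realized)) / len(realized)


baseline_mse = mse([baseline_forecast] * len(test), test)
# Stand-in for the manager's "AI" forecasts: here, the (normally unobservable)
# signal itself, i.e. a best case. A real check would use actual model outputs.
model_mse = mse(signal_test, test)

print(f"baseline MSE: {baseline_mse:.6f}")
print(f"model MSE:    {model_mse:.6f}")
print("model beats baseline:", model_mse < baseline_mse)
```

A firm with genuine AI implementation should be able to produce exactly this kind of table for its real models, on held-out data, against benchmarks and equal-weighted portfolios; a firm engaged in AI washing typically cannot.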
The questionnaire probes data sources and preprocessing, asking what data trains the models, whether alternative data like satellite imagery or sentiment analysis is incorporated, and how missing data, outliers, and limited datasets are handled. It addresses interpretability — how does the firm maximize model explainability, whether through model choice or post-implementation communications? Critical questions about robustness ask how firms guard against overfitting, tune hyperparameters, monitor model drift, and implement retraining mechanisms. Finally, governance questions explore internal AI audit processes, review frequency, and compliance structures. For outsourced AI technology, additional questions assess quality assurance processes. The report acknowledges that firms will invoke the “secret sauce” defense to avoid revealing proprietary details, but argues thorough interrogation remains necessary to distinguish genuine from superficial AI use.
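One of the robustness questions, monitoring for model drift, lends itself to a short sketch. The rule below (flag drift when the recent average forecast error exceeds a multiple of the validation-time error) is a common industry pattern offered as an assumption, not the report’s prescribed method; the window and tolerance values are hypothetical.

```python
# Hedged sketch of a drift monitor: flag drift when recent mean absolute
# error exceeds a tolerance multiple of the error measured at validation.
# The thresholding rule and parameters are illustrative assumptions.

from collections import deque
from statistics import mean


class DriftMonitor:
    """Flag drift when recent forecast error rises above a baseline band."""

    def __init__(self, baseline_mae: float, window: int = 20, factor: float = 1.5):
        self.baseline_mae = baseline_mae  # MAE measured at validation time
        self.window = window              # number of recent observations tracked
        self.factor = factor              # tolerance multiple before flagging
        self.errors = deque(maxlen=window)

    def observe(self, forecast: float, realized: float) -> bool:
        """Record one forecast/outcome pair; return True if drift is flagged."""
        self.errors.append(abs(forecast - realized))
        if len(self.errors) < self.window:
            return False  # not enough recent data to judge
        return mean(self.errors) > self.factor * self.baseline_mae


monitor = DriftMonitor(baseline_mae=0.01, window=5, factor=1.5)
# Stable period: errors of 0.01, right at the validation baseline
for _ in range(5):
    monitor.observe(forecast=0.02, realized=0.01)
print("drift after stable period:", monitor.observe(0.02, 0.01))   # False
# Degrading period: errors of 0.03, triple the baseline
for _ in range(5):
    monitor.observe(forecast=0.04, realized=0.01)
print("drift after degradation:", monitor.observe(0.04, 0.01))     # True
```

In a due diligence conversation, a firm with genuine monitoring should be able to name its drift metric, its trigger thresholds, and the retraining action taken when a flag fires.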
AI Washing vs. Explainable AI and Transparency
The CFA Institute report positions AI washing as standing in “almost direct opposition” to the principles of Explainable AI (XAI), a growing movement within the AI research and governance communities. XAI aims to provide users with maximum transparency and control over AI systems, focusing on making opaque algorithms — particularly deep learning models — more accessible and understandable to human users and overseers. The fundamental premise of XAI is that users should be able to understand why an AI system produces specific outputs, enabling informed evaluation and appropriate trust calibration.
AI washing undermines this premise from the opposite direction. Rather than making genuine AI systems more transparent, AI washing creates false impressions about systems that may not meaningfully exist. When a firm misleads users about its AI-driven products, it makes those products harder to understand — not because the underlying AI is complex, but because the gap between marketing claims and reality introduces a layer of confusion that no amount of explainability research can address. The problem is not that the AI is unexplainable; it is that there may be far less AI to explain than claimed.
The report argues that minimizing AI washing directly supports the goals of XAI by ensuring that transparency efforts focus on genuine AI systems rather than marketing constructs. When investors can trust that a firm’s AI claims are accurate, the conversation can shift productively toward understanding how those systems work, what their limitations are, and how they contribute to investment decisions. This alignment between anti-AI-washing efforts and XAI principles suggests that regulators and industry standards bodies should treat AI washing and AI transparency as complementary policy objectives rather than separate concerns. The CFA Institute’s Code of Ethics provides a natural framework for integrating these objectives through its requirements for truthful communication and duty to clients.
Regulatory Implications of AI Washing in Finance
AI washing is becoming subject to heightened scrutiny from both the investment community and regulatory authorities. The report references the EU Artificial Intelligence Act (Regulation [EU] 2024/1689), which provides definitional frameworks for AI systems and establishes regulatory obligations based on risk levels. While the EU AI Act does not specifically address AI washing in investment marketing, its comprehensive definition of what constitutes an AI system — “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment” — provides a benchmark against which marketing claims can be evaluated.
The CFA Institute report anchors its ethical analysis in the organization’s own Code of Ethics and Standards of Professional Conduct. The author argues that any asset manager or asset owner must be able to provide sufficient detail regarding why and how they implemented AI technology, what specific frameworks they used, and what results or improvements they observed. This standard aligns with the Code’s principles of transparency and duty to clients — misleading clients about the technological basis of an investment process is no different from misleading them about any other material aspect of how their money is managed.
The regulatory implications extend beyond disclosure requirements. As AI adoption claims become material to investment selection decisions — influencing allocators, consultants, and end investors in their choice of managers — inaccurate AI claims could constitute a form of material misrepresentation. Regulators in multiple jurisdictions are beginning to scrutinize ESG-related greenwashing claims; AI washing may follow a similar trajectory as AI adoption becomes an increasingly important factor in institutional investment decisions. The report suggests that the investment industry should proactively address AI washing through self-regulation and enhanced due diligence standards rather than waiting for regulatory intervention.
Eliminating AI Washing for Better Investment Outcomes
The CFA Institute report concludes with a clear message: understanding the motivations behind AI washing and learning to recognize its telltale signs enables stakeholders to minimize and eventually eliminate the phenomenon, leading to better investment outcomes for all participants. The report does not advocate that investors must use AI — rather, it insists that firms claiming to use AI must be truthful about how, to what extent, and with what results. This distinction is crucial: the goal is not universal AI adoption but universal honesty about AI adoption.
A critical recommendation targets asset owners themselves: they must develop some minimal competence in AI methodologies to effectively evaluate managers’ claims. The technical nature of artificial intelligence demands it — investors who lack even basic understanding of machine learning concepts, model validation approaches, and data quality requirements cannot meaningfully assess whether a manager’s AI claims are substantive or superficial. This competence requirement extends to consultants, fund selectors, and other gatekeepers who influence capital allocation decisions in the institutional investment ecosystem.
The report’s final framing is both practical and principled. Firms selling financial products should conform to the same standards of transparency demanded for other products in any industry. A pharmaceutical company cannot market a drug with unsubstantiated efficacy claims; a technology company cannot sell software with fabricated performance benchmarks. Investment firms claiming AI-driven processes should face equivalent scrutiny and accountability. The unique challenges of applying AI to investment management — more limited data, higher volatility, fewer observations, and the inability to conduct controlled experiments — make genuine AI adoption genuinely difficult. That difficulty is precisely what makes truthful communication about AI capabilities so important, and AI washing so harmful to the investment ecosystem. Discover how leading asset managers are navigating these challenges in our interactive analysis of AI transparency in asset management.
Frequently Asked Questions
What is AI washing in the investment industry?
AI washing is the practice where companies, organizations, and individuals falsely or inaccurately claim to leverage AI technologies to enhance their investment processes. It includes using buzzwords and marketing strategies that exaggerate the true capabilities or presence of AI in business activities, leading to investor confusion and potential ethical concerns.
How can investors detect AI washing by asset managers?
The CFA Institute recommends starting with a personnel check — investigating whether leadership of AI departments has genuine expertise in data science and machine learning. Key red flags include leaders with scant AI education, inability to specify algorithms used, lack of quantitative performance comparisons, and no concrete examples of AI-influenced investment decisions. A 10-question due diligence template helps evaluate claims systematically.
Why do investment firms engage in AI washing?
Firms face commercial pressure to appear technologically sophisticated, fear falling behind competitors, and recognize that clients increasingly expect AI adoption. However, genuine AI implementation requires considerable time, resources, and specialized talent that many firms lack. This tension between commercial incentives and implementation challenges creates conditions where overstating AI capabilities seems more efficient than building genuine capacity.
What are the regulatory implications of AI washing?
AI washing is subject to heightened regulatory scrutiny. The EU AI Act provides definitional frameworks for AI systems, while financial regulators expect transparency and truth in advertising for financial products. The CFA Institute’s Code of Ethics requires sufficient detail about AI implementation, including specific frameworks used and measurable results observed, aligning with principles of transparency and duty to clients.
How widespread is AI washing in finance?
According to the CFA Institute research, AI washing is likely not widespread at present given the current state of AI adoption in investment management. However, its inherently subjective nature makes it almost impossible to quantify precisely. With 52% of financial services firms now claiming to use generative AI (up from 40% in 2023), the potential for exaggerated claims is growing as competitive pressure intensifies.
What is the Jenga problem in AI adoption for investment firms?
The Jenga problem refers to the risk that incorporating AI components into existing investment processes might disrupt them, similar to how removing a block in Jenga can collapse the entire tower. Asset managers with established, commercially successful processes fear that replacing traditional elements with AI-driven components could produce unfavorable investment outcomes, making them reluctant to genuinely adopt AI even while marketing AI capabilities.