Agentic AI and Trust: Capgemini’s Guide to Human-AI Collaboration

📌 Key Takeaways

  • $450B by 2028: Agentic AI is projected to generate $450 billion in economic value across 14 countries, with scaled adopters gaining 2.5% of annual revenue.
  • Trust is declining: Only 27% of organizations trust fully autonomous AI agents, down from 43% just one year ago — a paradox driven by real-world experience.
  • 85% low autonomy: Despite the hype, 85% of AI agent deployments operate at basic assistance levels, with only 2% achieving full autonomy.
  • Human oversight pays off: 74% of executives say benefits of human oversight outweigh costs, and 90% view human involvement as beneficial or cost-neutral.
  • Ethics gap persists: Only 14% of organizations have fully integrated ethical AI principles, even as 48% express concern about bias and safety risks.

The Rise of Agentic AI: From Tools to Teammates

Artificial intelligence is undergoing a fundamental transformation. Where generative AI introduced the world to systems that could create text, images, and code on command, agentic AI represents the next evolutionary leap — autonomous systems capable of planning, reasoning, and executing complex multi-step tasks with minimal human guidance. Capgemini’s landmark 2025 report, surveying 1,500 executives across 14 countries and 13 industry sectors, reveals that this shift is not a distant possibility but an accelerating reality reshaping how enterprises operate.

The numbers tell a compelling story. AI agent adoption has surged approximately 3.5 times in just one year, with 14% of organizations now implementing AI agents at partial or full scale, and another 23% running active pilots. Perhaps most striking is the pace: generative AI at-scale deployments leaped from 6% in 2023 to 24% in 2024, and agentic AI appears to be following the same explosive trajectory. The inference cost for GPT-3.5-level capabilities has dropped 280-fold since November 2022, while hardware costs decline 30% annually and energy efficiency improves 40% per year.

Yet this surge in capability and adoption comes with a paradox that sits at the heart of Capgemini’s findings: as organizations gain more experience with AI agents, their trust is actually declining. Understanding why — and what to do about it — is critical for any organization seeking to harness what may be the most transformative technology of the decade. For enterprises navigating this landscape, exploring interactive AI research experiences can help teams build shared understanding of these rapidly evolving capabilities.

The $450 Billion Opportunity: Economic Potential Unlocked

Capgemini’s research projects that agentic AI will generate $450 billion in total economic value by 2028 across the 14 surveyed countries alone. If every organization achieved the anticipated benefits, the figure would balloon to $3.6 trillion. These are not abstract projections — they reflect measurable gains already being realized by early movers.

Organizations that have successfully scaled AI agent implementations report average gains of $382 million over three years, equivalent to approximately 2.5% of their annual revenue. By contrast, organizations still in exploration or pilot phases average just $76 million (0.5% of revenue) — a fivefold gap that underscores the first-mover advantage. The trajectory of collective gains across surveyed organizations is equally striking: $19 billion expected in the first 12 months, growing to $46 billion in year two and $92 billion by year three, totaling $157 billion over the three-year horizon.

Real-world case studies reinforce these projections. A US consumer health organization reported 10-12% productivity improvement, while Cox Communications achieved more than 30% improvement in structured processes. Ericsson projects 10% efficiency gains conservatively and 25% optimistically. Microsoft data suggests organizations with strong data foundations can realize 10%+ revenue uplifts through agentic AI. An overwhelming 93% of executives believe that organizations scaling AI agents within the next 12 months will secure a lasting competitive advantage, with 61% describing agentic AI’s potential as truly transformative.

The Trust Paradox: Why Confidence Is Falling

Here lies the central tension of Capgemini’s report, and arguably the defining challenge for agentic AI adoption: trust is eroding even as investment and deployment accelerate. Only 27% of organizations express trust in fully autonomous AI agents, down dramatically from 43% just twelve months earlier. Across nearly every measure of AI trust, confidence has declined — trust in AI agents to send professional emails fell from 50% to 39%, trust in data analysis dropped from 63% to 47%, and trust in customer service improvement declined from 64% to 50%.

This is not fear-based resistance from the uninformed. It is experience-based skepticism from organizations that have actually deployed AI agents and encountered their limitations firsthand. Sixty percent of organizations do not fully trust AI agents to manage tasks autonomously, and 47% believe AI agents lack emotional intelligence. The report cites the cautionary example of Klarna, which initially replaced human customer service agents with AI but subsequently had to re-hire humans after discovering that over-reliance on AI degraded service quality. Similarly, 40% of employees feel uncomfortable submitting AI-generated work, and 34% consider AI output inferior to manually produced work.

Interestingly, organizations with more deployment experience show moderately higher trust — 47% of implementing organizations report above-average trust versus 37% in the exploration phase — suggesting that sustained, well-managed exposure can gradually rebuild confidence. But the initial trust deficit is real and must be addressed systematically, not dismissed. As research on AI-driven enterprise transformation shows, building trust requires structured approaches, not just better technology.

Autonomy Levels: A Six-Tier Framework

One of the report’s most valuable contributions is its six-level autonomy framework for classifying AI agent deployments. This taxonomy provides essential vocabulary for organizations trying to assess where they stand and where they should aim. The levels range from Level 0 (no agent involvement) through Level 1 (AI-assisted automation), Level 2 (AI-augmented decision-making), Level 3 (AI-integrated, process-centric operations), Level 4 (independent operation with multi-agent teams), to Level 5 (fully autonomous, self-evolving systems).
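For teams that want to tag processes against this taxonomy in internal tooling, the six levels map naturally onto a simple enumeration. The sketch below is purely illustrative: the class name, member names, and the oversight helper are assumptions for this article, not part of any Capgemini framework or tool.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Hypothetical encoding of the report's six-level taxonomy."""
    NO_AGENT = 0      # Level 0: no agent involvement
    ASSISTED = 1      # Level 1: AI-assisted automation
    AUGMENTED = 2     # Level 2: AI-augmented decision-making
    INTEGRATED = 3    # Level 3: AI-integrated, process-centric operations
    INDEPENDENT = 4   # Level 4: independent operation with multi-agent teams
    AUTONOMOUS = 5    # Level 5: fully autonomous, self-evolving systems

def requires_human_oversight(level: AutonomyLevel) -> bool:
    """Illustrative rule: treat everything below Level 4 as human-supervised."""
    return level < AutonomyLevel.INDEPENDENT
```

Because `IntEnum` members compare as integers, a process inventory tagged this way can be filtered or aggregated by level with ordinary comparisons.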

The current reality is humbling. Roughly 85% of business processes are expected to operate at low autonomy (Levels 0-2) over the next 12 months, with only 15% reaching Level 3 or above. By 2028, that higher-autonomy share is projected to grow to 25%, but only 4% of processes are expected to achieve Level 5 full autonomy within three years. In terms of day-to-day decision-making, AI agents currently handle just 6% of organizational decisions, projected to reach only 8% in one to three years.

These figures reveal a crucial insight: most of what is currently marketed as “agentic AI” in enterprise settings is essentially advanced automation or augmented assistance at Levels 1 and 2. True agentic behavior — where AI systems independently plan, coordinate, and execute across complex processes — remains the exception rather than the rule. Organizations should calibrate their expectations accordingly, focusing on incremental autonomy gains rather than leaping to full automation.

The autonomy distribution over the next 12 months is telling: 45% of processes at Level 0, 23% at Level 1, 17% at Level 2, 9% at Level 3, 4% at Level 4, and just 2% at Level 5. Over a one-to-three-year horizon, these shift modestly: 30% at Level 0, 25% at Level 1, 21% at Level 2, 14% at Level 3, 7% at Level 4, and 4% at Level 5. The trajectory is clearly upward, but the timeline to widespread high-autonomy deployment is measured in years, not months.

Where AI Agents Are Being Deployed Today

Capgemini’s survey maps AI agent deployment across 14 organizational functions, revealing clear patterns in where enterprises are finding the most immediate value. Customer services leads with 87% of organizations deploying or planning to deploy AI agents (56% within 12 months), followed by IT at 84%, sales at 78%, and operations at 75%. Marketing sits at 69%, with product design and R&D at 62% and finance at 63%.

At the other end of the spectrum, functions requiring higher judgment, regulatory compliance, and strategic thinking show slower adoption. Legal and compliance stands at just 34%, sustainability at 28%, and corporate strategy at 41%. HR adoption is also modest at 49%, reflecting both the sensitivity of people-related decisions and the regulatory complexity surrounding automated employment processes.

The industry landscape is equally diverse. Mercedes-Benz has deployed AI agents in automotive design and customer interaction, while Rolls-Royce uses ServiceNow’s AI orchestrator for complex engineering operations. In financial services, Capital One runs a concierge AI agent for customer onboarding, and BlackRock has built its Asimov platform for investment analysis. Walmart deploys AI shopping agents, Novo Nordisk partners with NVIDIA on drug discovery agents, and Siemens is integrating AI agents into industrial automation workflows. PepsiCo leverages Salesforce’s Agentforce platform, while Cisco has partnered with Mistral AI for customer experience optimization.

Human-AI Collaboration Models for the Enterprise

Perhaps the most forward-looking section of Capgemini’s research examines how the relationship between humans and AI agents is expected to evolve. Today, the dominant model treats AI agents as subordinates (21%) or augmentation tools (41%). Within one to three years, the landscape shifts dramatically: AI agents as team members within human-supervised teams rises from 21% to 38%, agents directing work to humans grows from 9% to 21%, and agents supervising other agents increases from 4% to 12%.

This evolution represents a profound organizational transformation. It is not simply about deploying new technology — it requires rethinking team structures, decision-making authority, accountability frameworks, and management practices. The report emphasizes that 74% of executives believe the benefits of human oversight outweigh its costs, and 90% view human involvement as beneficial or cost-neutral. This is a strong endorsement of the “human-in-the-loop” approach, even as autonomy levels gradually increase.

The collaboration benefits extend beyond efficiency. Sixty-five percent of organizations expect greater engagement in high-value tasks, 53% anticipate increased creativity, 49% project greater employee satisfaction, and 37% expect lower attrition rates. Enterprises that design their human-AI collaboration models thoughtfully stand to gain not just productivity improvements but genuine organizational resilience. As organizations explore how to prepare their workforce for AI integration, collaboration design becomes the central strategic question.

Data Readiness and Infrastructure Gaps

The Capgemini report delivers a sobering assessment of organizational readiness. Fewer than 1 in 5 organizations (18%) report high data readiness maturity, and 82% report low-to-medium AI infrastructure maturity. Only 9% are fully prepared in data integration and interoperability, and just 13% demonstrate strong readiness in data monitoring and lifecycle management. These gaps represent the single largest barrier between current AI agent capabilities and their real-world effectiveness.

AI infrastructure maturity across key dimensions tells a similar story. In computing capacity, only 22% of organizations rate themselves as highly mature. Integration readiness stands at 16%, orchestration capability at 18%, fine-tuning capacity at a concerning 14%, and cybersecurity readiness at 20%. These numbers indicate that even well-funded enterprises are struggling with the foundational requirements for effective AI agent deployment.

The data readiness challenge is multifaceted. Organizations need clean, well-structured, and accessible data for AI agents to function effectively. They need robust data pipelines, real-time integration capabilities, proper governance frameworks, and security architectures that can accommodate autonomous AI decision-making. Many organizations have accumulated years of technical debt in their data infrastructure, and the demands of agentic AI are exposing these weaknesses at scale. Only 16% of organizations have formalized a strategy and roadmap for AI agent implementation, while 39% report having multiple initiatives but no overarching strategy — a fragmentation that compounds infrastructure challenges.

Ethics, Governance, and Responsible AI Agents

The governance gap is arguably the most urgent finding in Capgemini’s research. While awareness of AI risks is high — 51% of executives cite privacy concerns, 48% worry about safety risks, 48% flag unwanted bias, and 46% point to lack of transparency — active mitigation lags dramatically behind concern. Only 34% are actively mitigating privacy risks, 29% addressing safety, 28% tackling bias, and 24% working on transparency. The gap between acknowledgment and action is consistently 15-25 percentage points.

The ethical AI maturity landscape reinforces this concern. Only 14% of organizations have fully integrated ethical AI principles, while 18% have no formal measures whatsoever (nascent stage). Thirty-six percent show inconsistent adoption (emerging), and 33% demonstrate partial adoption (developed). Given that 68% of executives worried about bias in generative AI in 2024 (up from 36% in 2023), and 48% express concern about the ethical implications of deploying AI agents specifically, the governance infrastructure is clearly not keeping pace with deployment velocity.

Capgemini recommends several structural approaches: embedding ethical reasoning directly into AI agent design rather than applying it as an afterthought, making agent decision-making traceable and auditable, creating layered governance through “guardian agents” that monitor other agents’ behavior, appointing dedicated AI ethicists, and establishing continuous feedback loops. The concept of guardian agents — AI systems specifically designed to oversee and constrain other AI agents — is particularly noteworthy, representing a form of AI-powered governance that scales with deployment. Regulatory frameworks like the EU AI Act are also shaping how organizations approach responsible deployment.
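The guardian-agent pattern can be made concrete in a few lines. The following Python sketch is purely illustrative of the layered-governance idea: the class name, the blocked-topics rule, and the shape of the audit log are assumptions for this article, not a design specified in the report.

```python
class GuardianAgent:
    """Illustrative monitor that audits other agents' proposed actions."""

    def __init__(self, blocked_topics):
        self.blocked_topics = {t.lower() for t in blocked_topics}
        self.audit_log = []  # traceable, auditable decision trail

    def review(self, agent_id: str, action: str, payload: str) -> bool:
        """Approve or veto another agent's proposed action and log the outcome."""
        approved = not any(t in payload.lower() for t in self.blocked_topics)
        self.audit_log.append((agent_id, action, approved))
        return approved
```

In practice the review rules would be far richer (policy models, escalation tiers), but even this skeleton shows the two properties the report emphasizes: every decision is checked by a separate layer, and every check leaves an auditable trace.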

Workforce Transformation and New Roles

The human dimension of agentic AI adoption receives extensive attention in Capgemini’s report, and the findings paint a complex picture. On one side, 61% of organizations report rising employee anxiety about AI agents’ impact on jobs, and 52% believe AI agents will displace more jobs than they create. On the other, 70% believe AI agents will necessitate organizational restructuring, 68% note employees could redirect freed capacity to higher-value tasks, and 59% see the possibility of creating entirely new roles such as AI agent supervisors and behavior analysts.

The skills required for the agentic AI era span both technical and interpersonal domains. On the hard skills side, data management leads at 59%, followed by programming and software development at 53% and troubleshooting and debugging AI systems at 50%. Soft skills are equally critical: decision-making (52%), collaboration and teamwork (48%), and logical reasoning (43%). This dual requirement suggests that the future workforce will need to be both technically literate and highly skilled in the uniquely human capabilities that AI agents cannot replicate.

Organizations are also grappling with a significant knowledge deficit. Only 53% of leaders claim sufficient understanding of AI agent capabilities, just 39% clearly understand the differences between AI, generative AI, and agentic AI, and only 28% are confident they can extract the full potential of AI agents. This knowledge gap exists at the leadership level — the very people making deployment and investment decisions. Closing it is essential, not just through training programs but through hands-on experience with well-structured pilot deployments. Research from McKinsey’s State of AI consistently shows that leadership AI literacy correlates directly with successful adoption outcomes.

Building Trust: A Roadmap for Agentic AI Adoption

Capgemini’s report culminates in a clear set of recommendations for organizations seeking to build sustainable trust in their agentic AI deployments. The top factors that executives identify as trust-builders are illuminating: demonstrated accuracy and reliability leads at 52%, followed by explanation and transparency at 45%, robust security, governance, and compliance at 42%, human oversight with the ability to intervene at 36%, and the ability to measure AI impact at 32%.

The practical roadmap emerges from these findings. First, organizations should start with process redesign, not technology deployment. Mapping existing workflows, identifying where AI agents can add value at appropriate autonomy levels, and designing new processes around human-AI collaboration is more effective than retrofitting AI into existing structures. Second, organizations need a structured framework for AI mix selection, categorizing decisions by risk level, reversibility, ethical sensitivity, creativity requirements, breadth of impact, and compliance obligations.
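As a thought experiment, the second recommendation's categorization could be expressed as a small scoring helper. Everything below is hypothetical: the six dimensions follow the report's list, but the 1-5 scales, the thresholds, and the resulting autonomy ceilings are placeholder assumptions an organization would need to calibrate for itself.

```python
from dataclasses import dataclass

@dataclass
class DecisionProfile:
    """Hypothetical 1 (low) to 5 (high) ratings on the report's six dimensions."""
    risk: int
    reversibility: int        # 1 = easily reversed, 5 = irreversible
    ethical_sensitivity: int
    creativity_required: int
    breadth_of_impact: int
    compliance_burden: int

def max_autonomy_level(p: DecisionProfile) -> int:
    """Map a decision profile to an illustrative ceiling on agent autonomy (0-5)."""
    exposure = max(p.risk, p.reversibility, p.ethical_sensitivity,
                   p.breadth_of_impact, p.compliance_burden)
    ceiling = 6 - exposure            # exposure 1 -> Level 5, exposure 5 -> Level 1
    if p.creativity_required >= 4:    # creative judgment stays human-augmented
        ceiling = min(ceiling, 2)
    return max(ceiling, 0)
```

The point is not the particular arithmetic but the discipline: every class of decision gets an explicit, documented autonomy ceiling before any agent is deployed against it.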

Third, data foundations must be strengthened before scaling. The 82% of organizations with low-to-medium AI infrastructure maturity cannot simply deploy their way to readiness. Investment in data quality, integration, and governance is a prerequisite, not a parallel workstream. Fourth, organizations should design for interoperability and orchestration from the outset, ensuring that AI agents can communicate effectively with each other and with existing enterprise systems.

Fifth, the scope should extend to business model innovation. Agentic AI is not just an efficiency tool — it can enable entirely new value creation models, pricing structures (55% prefer consumption-based pricing), and customer engagement approaches. Finally, organizations should establish new performance metrics for hybrid teams that reflect the unique dynamics of human-AI collaboration rather than simply measuring AI agents against purely human benchmarks. According to Stanford HAI’s research, organizations that develop collaboration-specific metrics see significantly better outcomes than those using traditional productivity measures.

Capgemini’s recommendation framework also emphasizes the critical importance of defining “autonomy boundaries” within digital business architecture — clear, documented guidelines about what AI agents can and cannot do, with graduated escalation procedures for edge cases. Risk mitigation strategies most commonly cited include human oversight at critical decision points (69%), kill switches (48%), human-centric agent design (36%), and building reporting and feedback systems (65%). As the NIST AI Risk Management Framework emphasizes, structured governance and continuous monitoring form the foundation of trustworthy AI deployment.
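One way to make “autonomy boundaries” concrete in code is a guard that combines the three mitigation strategies the report cites most often: an allow-list of documented actions, a confidence threshold for graduated escalation, and a kill switch. This sketch is illustrative only; the action names, threshold value, and flag are assumptions, not prescriptions from the report.

```python
class AutonomyBoundary:
    """Illustrative guard enforcing documented limits on an AI agent's actions."""

    def __init__(self, allowed_actions, escalation_threshold=0.8):
        self.allowed_actions = set(allowed_actions)
        self.escalation_threshold = escalation_threshold
        self.kill_switch_engaged = False  # global stop, per the report's risk list

    def decide(self, action: str, confidence: float) -> str:
        """Return 'execute', 'escalate_to_human', or 'halt'."""
        if self.kill_switch_engaged:
            return "halt"
        if action not in self.allowed_actions:
            return "escalate_to_human"    # outside documented boundaries
        if confidence < self.escalation_threshold:
            return "escalate_to_human"    # edge case: graduated escalation
        return "execute"
```

Human oversight at critical decision points then becomes a property of the architecture rather than a policy document: the agent physically cannot act outside its declared envelope.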

Frequently Asked Questions

What is agentic AI and how does it differ from traditional AI?

Agentic AI refers to AI systems that can autonomously plan, reason, and execute multi-step tasks with minimal human intervention. Unlike traditional AI that responds to single prompts, agentic AI operates across autonomy levels from assisted (Level 1) to fully autonomous (Level 5), making independent decisions and coordinating with other AI agents or human team members.

Why is trust in AI agents declining despite growing adoption?

According to Capgemini’s 2025 research, trust in fully autonomous AI agents dropped from 43% to 27% in just one year. This paradox stems from experience-based skepticism: as organizations deploy AI agents, they encounter real limitations including hallucinations, lack of emotional intelligence, and unpredictable behavior, leading to more informed but lower trust levels.

How much economic value could agentic AI generate by 2028?

Capgemini projects agentic AI will generate $450 billion in economic value by 2028 across 14 surveyed countries. Organizations that have already scaled AI agents report average gains of $382 million over three years (2.5% of annual revenue), compared to $76 million for others still in early stages.

What are the main risks of deploying AI agents in enterprises?

The top risks include privacy concerns (51% of executives), safety risks (48%), unwanted bias (48%), lack of transparency (46%), and skill degradation among human workers (43%). Critically, only 14% of organizations have fully integrated ethical AI principles, leaving significant governance gaps as deployment scales.

How should organizations structure human-AI collaboration?

Capgemini recommends a graduated approach: start with AI agents as augmentation tools (Levels 1-2), progress to integrated team members under human supervision (Level 3), and reserve full autonomy (Levels 4-5) for well-tested, low-risk processes. Notably, 74% of executives believe the benefits of human oversight outweigh its costs, and 90% view human involvement as beneficial or cost-neutral.
