CNCF Cloud Native Survey 2025: Kubernetes Becomes the AI Operating System with 82% Production Adoption

📌 Key Takeaways

  • 82% Production Adoption: Kubernetes production usage surged from 66% in 2023 to 82% in 2025, confirming its status as the enterprise infrastructure standard.
  • Kubernetes Is the AI OS: 66% of organizations hosting generative AI models use Kubernetes for inference workloads, establishing it as the operating system for AI.
  • Near-Universal Cloud Native: 98% of organizations have adopted cloud native techniques, with 59% reporting most of their development is now cloud native.
  • Culture Is the New Barrier: Cultural challenges (47%) have overtaken technical complexity as the primary obstacle to cloud native adoption for the first time.
  • GitOps Defines Leaders: 58% of cloud native innovators use GitOps extensively versus only 23% of adopters, marking it as a key maturity differentiator.

Why the CNCF Cloud Native Survey 2025 Matters for AI Infrastructure

The CNCF Annual Cloud Native Survey 2025 represents the most comprehensive snapshot of cloud native technology adoption across the global enterprise landscape. Published by the Cloud Native Computing Foundation (CNCF), which operates under the Linux Foundation, this annual survey draws on community-sourced data from organizations spanning every major industry vertical and geographic region. The 2025 edition carries particular weight because it documents a pivotal inflection point: the moment when Kubernetes transitioned from a container orchestration tool into what the CNCF now describes as the “operating system” for artificial intelligence.

Understanding the implications of this survey is essential for technology leaders, platform engineers, DevOps teams, and anyone involved in building or scaling AI-powered applications. The data reveals not just where the industry stands today, but the trajectories that will define cloud native strategy for years to come. With 82% of container users now running Kubernetes in production — up from 66% just two years earlier — the survey captures a technology ecosystem that has fundamentally shifted from experimental adoption to production-grade maturity.

For organizations evaluating their infrastructure investments, the CNCF survey provides empirical evidence that Kubernetes has become the common denominator for scale, stability, and innovation. This is especially relevant as enterprises race to bring AI and machine learning workloads from prototype to production. The convergence of cloud native practices and AI infrastructure represents a new chapter in how organizations build, deploy, and manage intelligent systems at scale. As Jonathan Bryce, executive director of CNCF, noted: “Kubernetes isn’t just scaling applications; it’s becoming the platform for intelligent systems.”

The survey findings also underscore a broader industry trend that platforms like Libertify’s Interactive Library help professionals navigate — the shift from static technical documentation to interactive, engaging knowledge experiences that make complex research accessible and actionable.

Kubernetes Production Adoption Reaches 82% — A New Cloud Native Milestone

The headline statistic from the CNCF Cloud Native Survey 2025 is striking: 82% of container users now run Kubernetes in production environments. This represents a 16-percentage-point increase from the 66% recorded in the 2023 survey, marking one of the most significant growth trajectories in enterprise infrastructure history. To appreciate the magnitude of this shift, consider that Kubernetes — which grew out of lessons from Google’s internal cluster manager, Borg — went from its 2014 open-source debut to the dominant production orchestration platform in barely a decade.

This level of production adoption signals that Kubernetes has decisively moved beyond the “early adopter” phase described in technology diffusion models. Organizations are no longer experimenting with Kubernetes in staging environments or running pilot projects — they are relying on it as the backbone of their production infrastructure. The 82% figure effectively means that for every five organizations using containers, more than four have committed to Kubernetes as their production orchestration layer.

The growth from 66% to 82% in just two years is remarkable when placed in context. Enterprise infrastructure transitions typically take five to seven years to move from early majority to late majority adoption. Kubernetes has compressed this timeline significantly, driven by several factors: the maturation of managed Kubernetes services from major cloud providers (Amazon EKS, Google GKE, Azure AKS), the standardization of the Kubernetes API as a universal control plane, and the growing ecosystem of CNCF-graduated projects that extend Kubernetes capabilities.

For enterprise architects and CIOs, the 82% production adoption rate eliminates much of the remaining risk associated with Kubernetes investments. When four out of five container-using organizations run Kubernetes in production, the talent pool, tooling ecosystem, and best practices library have all reached critical mass. The question is no longer “should we adopt Kubernetes?” but “how do we optimize our Kubernetes strategy for AI workloads and beyond?”

How Kubernetes Became the De Facto Platform for AI Workloads

The CNCF survey 2025 reveals a convergence that many industry observers predicted but few expected to materialize so quickly: Kubernetes has emerged as the preferred infrastructure platform for running artificial intelligence workloads in production. The survey explicitly positions Kubernetes as the “operating system for AI,” a designation supported by the data showing 66% of organizations hosting generative AI models use Kubernetes to manage some or all of their inference workloads.

This convergence between Kubernetes and AI is driven by practical requirements. AI inference workloads — the process of running trained models to generate predictions or content — demand elastic scaling, GPU resource management, low-latency networking, and automated failover. These are precisely the capabilities that Kubernetes was designed to provide. The Kubernetes scheduler, combined with device plugins for GPUs and specialized operators for machine learning frameworks like PyTorch and TensorFlow, creates a unified platform that abstracts the complexity of heterogeneous AI infrastructure.
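The GPU scheduling described above can be sketched as a Pod spec. The resource name `nvidia.com/gpu` is the one advertised by NVIDIA’s device plugin; the Pod name, image, and memory limit below are illustrative placeholders, not part of the survey:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference-server              # hypothetical workload name
spec:
  containers:
    - name: model-server
      image: registry.example.com/llm-inference:latest  # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1           # schedule onto a node with a free GPU
          memory: "16Gi"              # illustrative sizing
```

The scheduler treats the GPU like any other countable resource, which is exactly the abstraction that lets AI workloads share a cluster with conventional services.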

Several open-source projects within the CNCF ecosystem have accelerated this convergence. Kubeflow provides machine learning pipelines on Kubernetes, NVIDIA’s GPU Operator automates GPU management, and projects like KServe standardize model serving interfaces. Together, these tools transform Kubernetes from a general-purpose container orchestrator into a purpose-built AI platform that handles everything from model training data pipelines to production inference endpoints.
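As a sketch of what “standardized model serving” looks like in practice, a minimal KServe `InferenceService` declares a model by format and location and lets the platform handle the serving runtime. The name and `storageUri` below are hypothetical:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-demo                  # hypothetical model name
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn                 # tells KServe which runtime to use
      storageUri: "gs://example-bucket/models/demo"  # placeholder model location
```

Applying this manifest yields a versioned, autoscaled inference endpoint without the team writing any serving code.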

The significance of Kubernetes as the AI operating system extends beyond technical capabilities. It means that organizations can leverage their existing Kubernetes expertise, tooling, and operational practices for AI workloads without building parallel infrastructure stacks. Platform engineering teams that have invested years in Kubernetes operational maturity can now extend that investment to AI, reducing the time and cost of bringing machine learning models into production. This is a critical advantage in an era where AI deployment speed directly impacts competitive positioning.

Turn complex cloud native reports into interactive experiences your team will actually explore.

Try Libertify Free →

Cloud Native Maturity: 98% of Organizations Embrace Modern Infrastructure

Perhaps the most definitive indicator of cloud native’s mainstream status is the survey finding that 98% of organizations have adopted cloud native techniques. This near-universal adoption rate demonstrates that cloud native is no longer a technology choice — it is the technology standard. Only 10% of organizations describe themselves as either in the early stages of adoption or not using cloud native at all, meaning the vast majority have progressed well beyond initial experimentation.

The depth of adoption is equally telling. According to the survey, 59% of organizations report that “much” or “nearly all” of their development and deployment is now cloud native. This means that for a majority of enterprises, cloud native practices are not confined to a single team or greenfield project — they permeate the organization’s software delivery lifecycle. Microservices architectures, containerized deployments, infrastructure as code, and CI/CD pipelines have become the default approach rather than the exception.

This level of maturity has significant implications for the technology industry. With 98% adoption, the cloud native ecosystem has achieved the network effects that drive self-reinforcing growth. More adoption means more contributors to open-source projects, more training materials, more consultants, more job candidates with relevant skills, and more vendor support. The ecosystem becomes increasingly difficult to displace precisely because it has reached this level of ubiquity.

For organizations in the remaining 2% that have not yet adopted cloud native practices, the survey data presents a clear signal: the industry has made its choice. Delaying cloud native adoption now means falling behind not just in infrastructure modernization, but in the ability to leverage the AI capabilities that cloud native platforms enable. The convergence of cloud native and AI means that infrastructure decisions made today will directly impact an organization’s AI readiness tomorrow.

Understanding how different industries have approached this transition is crucial for benchmarking. Financial services, healthcare, and telecommunications have shown particularly strong cloud native maturity, driven by regulatory requirements for scalability and resilience. The growing body of enterprise AI adoption research available in interactive formats helps decision-makers compare their progress against industry peers.

AI Inference on Kubernetes: 66% of Generative AI Deployments Use Containers

The CNCF survey provides granular data on how organizations are specifically deploying AI workloads on Kubernetes. The headline figure — 66% of organizations hosting generative AI models use Kubernetes for some or all inference — reveals that Kubernetes has captured the majority of the AI infrastructure market before many specialized AI platforms could establish themselves.

However, the survey also reveals important nuances about AI deployment maturity. While infrastructure readiness is high, deployment frequency remains cautious: only 7% of organizations deploy AI models daily, while 47% deploy occasionally. This gap between infrastructure capability and deployment cadence suggests that organizations are still developing the operational practices, testing frameworks, and governance structures needed for continuous AI deployment. The infrastructure is ready; the organizational processes are catching up.

Notably, 44% of respondents report that they do not yet run AI or ML workloads on Kubernetes at all. This represents a significant untapped market and growth opportunity for the Kubernetes-AI ecosystem. These organizations may be running AI workloads on alternative platforms, or they may be in the early stages of AI exploration. Either way, the trajectory is clear: as more organizations move from AI experimentation to production deployment, Kubernetes will be the natural destination given its dominant position in production infrastructure.

The inference workload focus is particularly significant because inference — running trained models to generate predictions or content — accounts for the vast majority of AI compute costs in production. Training a model is a one-time (or periodic) expense, but serving that model to millions of users requires continuous, scalable infrastructure. Kubernetes’ ability to auto-scale inference endpoints based on demand, manage GPU resources efficiently, and ensure high availability makes it the logical choice for production AI serving.
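The demand-based autoscaling of inference endpoints mentioned above is typically expressed as a HorizontalPodAutoscaler. This is a generic sketch using the standard `autoscaling/v2` API; the target Deployment name, replica bounds, and CPU threshold are assumptions chosen for illustration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inference-hpa                 # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: model-server                # hypothetical inference Deployment
  minReplicas: 2                      # keep a warm baseline for latency
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70      # scale out above 70% average CPU
```

Production AI serving often swaps the CPU metric for a custom one such as GPU utilization or queue depth, but the declarative mechanism is the same.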

Organizations looking to accelerate their AI-on-Kubernetes strategy can benefit from studying documented patterns and architectures. The Kubernetes documentation provides foundational concepts, while specialized resources on AI infrastructure patterns help bridge the gap between general Kubernetes knowledge and AI-specific deployment practices.

GitOps and Platform Engineering Drive Cloud Native Innovation

One of the most actionable insights from the CNCF Cloud Native Survey 2025 is the strong correlation between GitOps adoption and cloud native maturity. The survey identifies a clear division between “cloud native innovators” — organizations at the cutting edge of adoption — and “adopters” still progressing along the maturity curve. The differentiating factor? GitOps. A commanding 58% of innovators use GitOps principles extensively, compared to only 23% of adopters.

GitOps represents a paradigm shift in infrastructure management where the desired state of the entire system is declared in Git repositories. Changes to infrastructure and application configurations are made through pull requests, reviewed by peers, and automatically reconciled by tools like Argo CD and Flux. This approach brings the same rigor, auditability, and collaboration to infrastructure that software engineering has long applied to application code.
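The “desired state declared in Git” model described above is concrete in an Argo CD `Application` resource, which points the cluster at a Git repository to reconcile against. The application name, repository URL, and paths here are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-service              # hypothetical application
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/platform-config  # placeholder repo
    targetRevision: main
    path: apps/payments               # directory holding the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true                     # delete resources removed from Git
      selfHeal: true                  # revert manual drift in the cluster
```

With `selfHeal` enabled, any out-of-band change to the cluster is automatically reverted to match Git, which is what makes the repository the single source of truth.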

The platform engineering movement, closely related to GitOps, is another key trend highlighted in the survey. The Backstage project — an open-source framework for building internal developer portals — ranks as the fifth-highest-velocity CNCF project. This ranking reflects the growing recognition that cloud native success depends not just on the right infrastructure, but on providing developers with self-service platforms that abstract complexity while maintaining operational guardrails.

Platform engineering teams are building “golden paths” — opinionated, pre-configured workflows that guide developers through common tasks like deploying a new microservice, provisioning a database, or setting up a monitoring dashboard. These golden paths encode organizational best practices into automated workflows, reducing cognitive load on developers while ensuring consistency and compliance. The combination of GitOps for state management and platform engineering for developer experience represents the operational blueprint for mature cloud native organizations.
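In Backstage, the entry point for a golden path is usually a catalog descriptor that registers a service in the developer portal. A minimal sketch, with the component name and owning team as placeholder assumptions:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service              # hypothetical service
  description: Handles payment processing
spec:
  type: service
  lifecycle: production
  owner: team-payments                # placeholder owning team
```

Once registered, the portal can attach documentation, CI status, and scaffolding templates to the component, which is how golden paths surface to developers.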

For technology leaders, the message is clear: investing in GitOps and platform engineering is not just an operational improvement — it is a strategic differentiator that separates innovators from the rest of the field. Organizations that treat infrastructure as code and provide developer-friendly platforms will move faster, deploy more reliably, and adapt more quickly to emerging requirements like AI workload management.

Make your cloud native strategy documents interactive. Engage your team with Libertify.

Get Started Free →

OpenTelemetry and Observability as Strategic Cloud Native Pillars

The CNCF survey 2025 confirms that observability has evolved from a niche concern into a strategic pillar of cloud native operations. OpenTelemetry, the open-source observability framework, has become the second-highest-velocity CNCF project with more than 24,000 contributors. This level of community engagement — second only to Kubernetes itself — signals that the industry recognizes observability as foundational rather than supplementary to cloud native success.

OpenTelemetry’s rise reflects a fundamental shift in how organizations think about monitoring and debugging distributed systems. Traditional monitoring approaches relied on vendor-specific agents and proprietary data formats. OpenTelemetry provides a vendor-neutral standard for collecting traces, metrics, and logs, enabling organizations to instrument their applications once and send telemetry data to any backend of their choice. This flexibility eliminates vendor lock-in and creates a unified observability layer across heterogeneous environments.
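The “instrument once, send anywhere” property usually runs through the OpenTelemetry Collector, whose pipeline configuration decouples instrumentation from the backend. A minimal sketch; the exporter endpoint is a placeholder for whichever OTLP-compatible backend an organization chooses:

```yaml
receivers:
  otlp:                               # accept OTLP from any instrumented app
    protocols:
      grpc:
      http:
processors:
  batch:                              # batch telemetry before export
exporters:
  otlphttp:
    endpoint: https://observability.example.com  # placeholder backend
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

Switching vendors means editing the `exporters` section of this file, not re-instrumenting application code, which is the lock-in elimination the survey highlights.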

The survey also reveals an emerging trend in profiling adoption. Nearly 20% of respondents now use profiling as part of their observability stack. Profiling — the practice of measuring resource consumption at the code level — provides deeper insights than traditional metrics by showing exactly which functions or code paths consume CPU, memory, or I/O. For AI workloads running on Kubernetes, profiling is particularly valuable because it can identify inefficiencies in model inference code, GPU utilization bottlenecks, and memory leaks that impact both performance and cost.

The strategic importance of observability grows exponentially as organizations deploy AI workloads alongside traditional applications. AI inference endpoints have unique monitoring requirements: model latency distributions, prediction confidence scores, data drift detection, and GPU utilization patterns all require specialized telemetry. OpenTelemetry’s extensible architecture allows organizations to capture these AI-specific signals within the same observability framework they use for all other services, maintaining a single pane of glass for operations teams.

For organizations building their cloud native observability strategy, the CNCF survey data validates the investment in OpenTelemetry as a long-term standard. With 24,000+ contributors, broad vendor support, and growing adoption across the ecosystem, OpenTelemetry is positioned to become as foundational to cloud native operations as Kubernetes is to compute orchestration.

Cultural Challenges Overtake Technical Complexity in Kubernetes Adoption

Perhaps the most surprising finding in the CNCF Cloud Native Survey 2025 is the inversion of adoption barriers. For the first time in the survey’s history, cultural challenges have overtaken technical complexity as the primary obstacle to cloud native adoption. “Cultural changes within the development team” is now the top challenge, cited by 47% of respondents. This surpasses traditional barriers like lack of training (36%), security concerns (36%), and technical complexity (34%).

This shift has profound implications for how organizations approach cloud native transformation. When technical complexity was the primary barrier, the solution was straightforward: better tooling, more documentation, managed services. But cultural barriers require a fundamentally different approach. They involve changing how teams communicate, how decisions are made, how risk is managed, and how success is measured. These are organizational change management challenges that cannot be solved with technology alone.

The cultural challenges manifest in several ways. Development teams accustomed to monolithic architectures and manual deployment processes must adapt to distributed systems thinking, microservices ownership, and automated deployment pipelines. Operations teams that traditionally controlled production environments must shift to enabling developer self-service through platforms and guardrails. Leadership must support the organizational restructuring that cloud native practices demand — from centralized control to distributed responsibility.

Hilary Carter, senior vice president of research at Linux Foundation Research, captured this dynamic precisely: “The next phase of cloud native evolution will be as much about people and platforms as it is about the tech itself. Organizations that invest in both will have a clear advantage.” This insight applies equally to AI adoption. The technical infrastructure for AI on Kubernetes is increasingly mature, but the organizational practices — model governance, responsible AI frameworks, ML engineering workflows — are still developing.

For enterprise leaders, the cultural challenge data provides a mandate to invest in change management alongside technology adoption. Training programs, internal champions, cross-functional collaboration structures, and clear communication of cloud native benefits at every organizational level become as critical as the technical architecture itself. Organizations that address cultural barriers proactively will accelerate their cloud native journey and, by extension, their AI capabilities. Resources like interactive digital transformation guides can help communicate complex technical changes in formats that resonate with non-technical stakeholders.

The Future of Cloud Native AI: What the CNCF Survey 2025 Means for Enterprise

The CNCF Annual Cloud Native Survey 2025 paints a clear picture of where the industry is heading. Kubernetes has established itself as the universal infrastructure layer, AI workloads are increasingly deployed on cloud native platforms, and the remaining barriers to adoption are organizational rather than technical. For enterprise decision-makers, these findings point to several strategic imperatives for the coming years.

First, Kubernetes investment is validated and should be expanded. With 82% production adoption and growing AI workload deployment, Kubernetes expertise and infrastructure represent durable competitive advantages. Organizations should deepen their Kubernetes capabilities, particularly in areas relevant to AI: GPU scheduling, model serving frameworks, and inference autoscaling.

Second, AI infrastructure should converge with cloud native infrastructure. The 66% of organizations already running AI inference on Kubernetes demonstrate that separate AI infrastructure stacks are unnecessary and inefficient. Platform engineering teams should extend their Kubernetes platforms to support AI workloads natively, leveraging the operational maturity they have built for traditional applications.

Third, GitOps and platform engineering are not optional. The clear correlation between GitOps adoption and cloud native maturity means that organizations delaying these practices are falling behind. Implementing GitOps workflows and building internal developer platforms should be priority investments for any organization serious about cloud native excellence.

Fourth, observability must be treated as a first-class concern. OpenTelemetry’s position as the second-highest-velocity CNCF project reflects the industry’s recognition that you cannot manage what you cannot measure. As AI workloads add complexity to production environments, comprehensive observability becomes even more critical.

Finally, cultural transformation must accompany technical adoption. The survey’s finding that cultural challenges now outweigh technical barriers is a call to action for leadership. Cloud native and AI success requires organizational alignment, not just technical implementation. Investing in people, processes, and communication is as important as investing in platforms and tools.

The cloud native ecosystem, anchored by Kubernetes and supported by projects like OpenTelemetry, Argo, and Backstage, has created a platform that is ready for the AI era. The question for enterprises is not whether to build on this foundation, but how quickly they can align their organizations to take full advantage of it. The full CNCF survey report provides the detailed data needed to inform these strategic decisions.

Transform the full CNCF survey into an interactive experience your leadership team will engage with.

Start Now →

Frequently Asked Questions

What does the CNCF Cloud Native Survey 2025 reveal about Kubernetes adoption?

The CNCF Annual Cloud Native Survey 2025 reveals that 82% of container users now run Kubernetes in production environments, up from 66% in 2023. Additionally, 98% of surveyed organizations have adopted cloud native techniques, and 59% report that much or nearly all of their development is now cloud native.

How is Kubernetes used for AI workloads in 2025?

According to the CNCF survey, 66% of organizations hosting generative AI models use Kubernetes to manage some or all of their inference workloads. Kubernetes has become the de facto operating system for AI, providing the scalable infrastructure needed to deploy and manage machine learning models in production.

What are the biggest challenges to cloud native adoption in 2025?

For the first time, cultural challenges have overtaken technical complexity as the primary barrier. “Cultural changes within the development team” is the top challenge, cited by 47% of respondents, followed by lack of training (36%), security concerns (36%), and technical complexity (34%).

What role does GitOps play in cloud native maturity?

GitOps is a key indicator of cloud native maturity. The survey shows that 58% of cloud native innovators use GitOps principles extensively, compared to only 23% of organizations still in the adopter phase. GitOps enables declarative infrastructure management and automated deployment workflows.

Why is OpenTelemetry important for cloud native infrastructure?

OpenTelemetry has become the second-highest-velocity CNCF project with over 24,000 contributors. It provides a unified standard for collecting telemetry data including traces, metrics, and logs across cloud native environments, making observability a strategic pillar rather than just a tooling decision.

Your documents deserve to be read.

PDFs get ignored. Presentations get skipped. Reports gather dust.

Libertify transforms them into interactive experiences people actually engage with.

No credit card required · 30-second setup

Our SaaS platform, AI Ready Media, transforms complex documents and information into engaging video storytelling to broaden reach and deepen engagement. We spotlight overlooked and unread important documents. All interactions seamlessly integrate with your CRM software.