From Open Internet to Open Intelligence: Why AI’s Market Structure Will Determine the Future of Innovation

📌 Key Takeaways

  • The Ladder Strategy: Big Tech used internet openness to grow, then pulled up the ladder behind them—they’re now repeating this with AI
  • Three Critical Chokepoints: Data access, computing power (~$5T investment by 2030), and model APIs control who can participate in AI
  • Application Economy Focus: AI’s real value lies in second-order effects and specialized applications, not just bigger foundation models
  • Market Concentration Risk: Closed models capture ~80% of usage and ~96% of revenue, creating privately governed intelligence
  • Policy Urgency: If we wait until AI dominance is complete, the debate about open vs. closed intelligence will be over

Ten years ago, the European Union established groundbreaking Open Internet Rules that helped preserve innovation and competition in the digital economy. But as former FCC Chairman Tom Wheeler warned in a landmark address to the European Parliament, the same forces that once championed internet openness are now building walls around artificial intelligence—and this time, they’re doing it from a position of unprecedented market power.

Wheeler’s analysis reveals a troubling pattern: the companies that used openness as a ladder to reach dominance in the platform era have now “pulled it up behind them” and are positioning themselves as the gatekeepers of the AI economy. With over $5 trillion potentially being invested in AI data centers by 2030, the infrastructure of intelligence is being built at breathtaking speed—with remarkably few constraints.

“It is the defining issue of the intelligence era,” Wheeler argues. The question isn’t just about AI safety or ethics—it’s about whether we’ll have “open intelligence” that enables broad-based innovation or “privately governed intelligence” controlled by a handful of dominant companies.

How Big Tech Exploited Openness Then Closed the Door

The story of internet openness reads like a cautionary tale for the AI era. In the early days of the internet, companies like Google, Amazon, and Facebook were fierce advocates for open standards, net neutrality, and nondiscriminatory access. These principles weren’t just idealistic—they were essential for these companies to grow and compete against established incumbents.

Google championed open internet principles because they needed equal access to reach users through any internet service provider. Amazon needed open protocols to build its e-commerce platform without gatekeepers controlling access. Facebook required open standards to create a social network that could scale globally without being blocked by intermediaries.

But once these companies achieved dominance, their relationship with openness changed dramatically. As Wheeler puts it: “They used openness as a ladder for themselves, and then pulled it up behind them.”

The transformation was systematic. Google began favoring its own services in search results. Amazon started using third-party seller data to compete against those same sellers with private-label products. Facebook acquired potential competitors and made it increasingly difficult for users to leave its ecosystem. The very companies that had benefited from internet openness began building walls to protect their market positions.

This strategy worked brilliantly—for them. By the time policymakers recognized the concentration of power, these platforms had become essential infrastructure for the digital economy. Any intervention risked disrupting systems that billions of people and millions of businesses had come to depend on.

Now, Wheeler warns, we’re watching the exact same playbook unfold in artificial intelligence—but this time, the companies aren’t starting from a position of weakness. They’re leveraging their existing dominance in data, computing, and distribution to control the emerging AI ecosystem from day one.

AI Is Infrastructure Transformation, Not Just Another Tech Cycle

To understand why AI’s market structure matters so much, Wheeler draws parallels to history’s greatest technological transformations. AI isn’t just another software upgrade—it’s comparable to the printing press, electrification, and the internet itself in terms of its potential to reshape society and the economy.

But there’s a crucial difference. Previous transformative technologies emerged in relatively competitive markets. When electricity was being developed, no single company controlled the infrastructure needed to generate, distribute, and utilize electrical power. When the internet was being built, multiple companies and institutions contributed to its development, and open standards ensured that no single entity could control access.

AI is developing in a very different context. The same companies that emerged victorious from the platform wars now sit at the center of the AI ecosystem. They control the cloud computing infrastructure, possess the largest data sets, have the resources to train the most capable models, and own the distribution channels that determine which AI applications reach users.

Wheeler emphasizes that “technology-driven transformation is less about the primary technology itself than its second-order effects that enable new uses, new processes, and new ways of working.” The real value of electrification didn’t come from power plants—it came from the factories, appliances, and systems that electricity enabled. Similarly, AI’s transformative potential lies not in foundation models themselves but in the applications they enable across every sector of the economy.

This is why market structure matters so much. If a few companies control access to AI capabilities, they effectively control which industries get transformed, which innovations reach the market, and which economic benefits get realized. The question becomes whether AI will be deployed to maximize social and economic value or to maximize the returns of a few dominant players.

The Application Economy: Where AI’s Real Value Lives

One of Wheeler’s most important insights is that we’re not building an AI economy—we’re building an “application economy” that happens to be powered by AI. The real transformation will come from specialized applications across manufacturing, logistics, energy, medicine, education, defense, journalism, and government.

This perspective shifts the policy focus from regulating AI models themselves to ensuring that the conditions exist for widespread application development and deployment. Wheeler argues that the current fixation on “ever bigger models” misses the point entirely. The value lies in diffusion—getting AI capabilities into the hands of innovators across every sector who can develop specialized solutions for specific problems.

Consider the historical parallel with electricity. The transformative impact didn’t come from building bigger power plants—it came from enabling every factory, household, and business to access electrical power and use it for their specific needs. Some applications, like electric motors in factories, revolutionized manufacturing. Others, like electric lights in homes, transformed daily life. The diversity of applications, not the concentration of generation, drove the transformation.

The same principle applies to AI. A hospital needs AI applications tuned for medical diagnosis, not generic conversational models. A manufacturing plant needs AI optimized for predictive maintenance and quality control. A school system needs AI designed for personalized learning, not one-size-fits-all chatbots.

This application-centric view reveals why market concentration is so problematic. If a few companies control access to AI capabilities through proprietary APIs and closed models, they become gatekeepers who determine which applications get built, which industries get transformed, and which innovations reach users.

Wheeler’s framework suggests a clear policy principle: “Prevent bottlenecks and promote diffusion.” This means ensuring that AI capabilities remain accessible to developers, researchers, and organizations across all sectors, rather than being locked behind the proprietary systems of a few dominant companies.

Chokepoint #1: Data as the Fuel of Intelligence

The first critical chokepoint in the AI ecosystem is data access and control. Wheeler’s analysis reveals how the same data advantages that powered Big Tech’s dominance in the platform era are now being leveraged to control AI development.

The issue isn’t just about having large datasets—it’s about having the right kinds of data with the right characteristics for training AI systems. Social media platforms possess behavioral data that reveals how people interact, think, and make decisions. E-commerce companies have transactional data that shows consumer preferences and purchasing patterns. Search engines have query data that reveals what people want to know and when.

This data concentration creates multiple problems for AI competition. First, it means that companies with existing data advantages can train more capable AI models. Google’s search data gives it insights into human information-seeking behavior that competitors can’t easily replicate. Amazon’s e-commerce data provides understanding of consumer behavior that enables more effective AI-powered recommendations and predictions.

Second, the current data ecosystem lacks the portability and interoperability standards that would allow competition to flourish. When users can’t easily move their data between services, or when companies use proprietary data formats that lock out competitors, it becomes nearly impossible for new entrants to access the data they need to train competitive AI systems.

Wheeler points to specific policy solutions: data portability requirements that allow users to move their information between services, interoperability standards that prevent data lock-in, and shared access frameworks that could enable broader AI development while protecting privacy and security.

The stakes are enormous. Without addressing data concentration, Wheeler warns, “the AI ecosystem will calcify around entrenched incumbents.” New companies won’t be able to access the data needed to develop competitive AI capabilities, innovative applications won’t have the datasets required for specialized AI systems, and entire sectors of the economy may be unable to harness AI because they lack access to relevant training data.

This isn’t just a competition issue—it’s an innovation issue. Economic research shows that data diversity drives AI innovation. When AI systems are trained on narrow datasets controlled by a few companies, they reflect the biases and limitations of those specific data sources. Broader data access leads to more robust, fair, and capable AI systems that serve society better.

Chokepoint #2: The $5 Trillion Compute Concentration

The second chokepoint is perhaps the most capital-intensive: control over computing infrastructure. Wheeler cites McKinsey estimates suggesting that over $5 trillion could be spent globally on AI data centers by 2030, representing one of the largest infrastructure build-outs in human history.

The scale of investment required creates natural barriers to entry, but Wheeler’s analysis shows how market structure amplifies these barriers into insurmountable moats. Currently, approximately two-thirds of the world’s cloud computing is controlled by just three companies: Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

This concentration means that access to computing power—essential for both training AI models and running AI applications—is controlled by the same companies that dominated the platform era. What starts as a technical advantage quickly becomes a strategic moat through several mechanisms that Wheeler identifies:

Preferential Access: Cloud providers can give their own AI development teams priority access to the most advanced computing resources, especially during periods of high demand or limited availability of specialized AI chips.

Bundling and Integration: Cloud providers can bundle computing resources with other services (data storage, AI APIs, developer tools) in ways that make it difficult for competitors to match the full value proposition.

Opaque Pricing: Complex pricing structures can make it difficult for customers to compare alternatives or for regulators to identify discriminatory practices.

Wheeler argues that the current focus on scaling to ever-larger AI models plays into this concentration dynamic. The narrative that “bigger is always better” in AI justifies massive computing investments that only the largest companies can afford, while potentially overlooking more efficient approaches that could democratize AI access.

The policy challenge is particularly complex because computing infrastructure is a legitimate area of investment and innovation. Unlike data or software, computing resources require real physical infrastructure that costs money to build and maintain. The question is how to ensure that this infrastructure serves broad innovation rather than becoming a tool for market control.

Wheeler suggests that the key is preventing computing power from becoming a tool for discriminatory access. This might involve requirements for nondiscriminatory pricing, limitations on bundling practices that foreclose competition, or even public investment in alternative computing infrastructure for research and development.

Chokepoint #3: Models and APIs as Governance Mechanisms

The third and potentially most powerful chokepoint is control over AI models and the APIs that provide access to them. Wheeler’s analysis reveals how this layer of the AI stack functions as a governance mechanism, allowing model owners to effectively control the evolution of AI itself.

The numbers tell a stark story. Research from MIT and Georgia Tech shows that closed AI models capture approximately 80% of usage and 96% of revenue in the AI model market. This isn’t just market dominance—it’s near-monopolization of the most valuable layer of the AI stack.

Wheeler explains how APIs function as more than technical interfaces—they’re governance structures. When developers build applications that depend on proprietary AI APIs, they’re not just accessing computational resources; they’re accepting a governance relationship where the API owner can change terms, modify capabilities, alter pricing, or even terminate access at will.

A concrete example illustrates the power dynamics at play. When OpenAI changed its GPT-3.5 API endpoints, hundreds of dependent applications faced costly migrations or service disruptions. This wasn’t just a technical update—it was a demonstration of how API control allows model owners to shape the entire ecosystem of applications built on their platforms.

The governance implications extend far beyond individual business relationships. As Wheeler puts it: “By controlling their APIs, owners of closed AI models can effectively govern the evolution of AI itself.” They determine which types of applications get support, which use cases receive priority development, and which innovations are allowed to reach the market.

This governance power is particularly concerning because it operates largely outside traditional regulatory frameworks. When a government agency or platform company makes policy decisions, there are typically processes for public input, appeals, or legislative oversight. When an AI model owner changes API terms, there’s no appeal process, no public consultation, and no regulatory oversight.

Wheeler argues that addressing this chokepoint requires reconceptualizing APIs as essential infrastructure rather than purely private services. This might involve requirements for API stability and predictability, standardized interfaces that prevent lock-in, or even mandated access for certain types of research or public interest applications.

The Toll Booth Problem in the AI Stack

Wheeler’s analysis culminates in what he calls the “toll booth” problem—how control over multiple layers of the AI stack allows incumbent companies to extract value from every transaction while suppressing innovation and competition. His framework reveals how the AI ecosystem is being structured to funnel economic value toward a few dominant players.

The AI stack, as Wheeler diagrams it, flows from chips to cloud computing to models to applications. At each layer, incumbent companies are positioning themselves as essential intermediaries who must be paid for access to the next level. It’s not just about owning each layer—it’s about controlling the connections between layers in ways that make competition nearly impossible.

Consider how this works in practice. A startup wants to develop an AI application for healthcare. They need access to computing resources (controlled by cloud providers), AI models (controlled by model developers), and data (controlled by platform companies). At each step, they face toll booths operated by the same small group of companies.

The toll booth metaphor is particularly apt because it captures how incumbent control doesn’t just extract economic value—it shapes the entire ecosystem. Highway toll booths don’t just collect money; they influence traffic patterns, determine which routes get developed, and affect where businesses and communities locate. Similarly, AI toll booths don’t just extract revenue; they determine which applications get built, which innovations reach the market, and which industries get transformed by AI.

Wheeler identifies four specific harms from this structure:

Innovation Suppression: When every layer of the stack is controlled by incumbents, truly disruptive innovations are less likely to emerge because they threaten existing revenue streams.

Cost Inflation: Multiple layers of toll collection drive up the cost of AI applications, making them less accessible to smaller organizations and limiting adoption across the economy.

Risk Concentration: When critical AI infrastructure is controlled by a few companies, system failures, security breaches, or poor decisions can cascade across the entire ecosystem.

Innovation Slowdown: Centralized control can slow the pace of improvement because incumbent companies may prioritize protecting existing advantages over pursuing breakthrough innovations that could disrupt their market positions.

The toll booth problem explains why simply having “competition” between a few large AI companies isn’t sufficient. When these companies control different layers of the same stack and have incentives to maintain barriers to entry, competition becomes more about dividing up economic rents than about driving innovation and reducing costs for users.

International Cooperation vs. Digital Mercantilism

Wheeler’s analysis extends beyond domestic policy to address one of the most contentious issues in AI governance: how to balance national competitiveness with international cooperation. His framework directly challenges the “digital mercantilism” approach that has gained popularity in some policy circles.

The mercantilist approach treats AI development as a zero-sum competition between nations, where protecting domestic AI companies from foreign competition is seen as essential for national security and economic competitiveness. This has led to export controls, investment restrictions, and other policies designed to prevent AI capabilities from flowing to potential adversaries.

Wheeler argues that this approach is fundamentally misguided for several reasons. First, AI is inherently transnational. The datasets used to train AI models, the applications that deploy AI capabilities, and the problems that AI solves all cross national boundaries. Attempting to build purely domestic AI ecosystems ultimately limits access to the global resources and markets that drive innovation.

Second, Wheeler contends that domestic competition is a prerequisite for international competitiveness. Countries with concentrated, protected AI markets are likely to fall behind countries with competitive, open AI ecosystems. The companies that emerge from competitive domestic markets are typically more innovative, efficient, and capable of succeeding globally than companies that are protected from competition.

The policy prescription is “compatible, not identical” oversight across democratic systems. Wheeler acknowledges that different countries will have different regulatory approaches, but argues that these approaches should be compatible enough to allow innovation and competition to flow across borders among democratic allies.

This aligns with broader economic research showing that international cooperation and compatible standards drive innovation faster than protectionist approaches. The internet’s global success came precisely from international cooperation on technical standards and governance frameworks, not from national attempts to build separate, protected digital ecosystems.

Wheeler’s framework suggests that the real competition is not between democratic nations but between democratic and authoritarian approaches to AI governance. Democratic countries share fundamental values around human rights, rule of law, and economic openness that should inform their AI policies. Fragmenting this democratic AI ecosystem in the name of national competitiveness ultimately weakens all democratic countries relative to authoritarian alternatives.

Open Intelligence vs. Privately Governed Intelligence

Wheeler’s most important conceptual contribution is reframing the central policy choice around AI. Rather than debating innovation versus oversight, or growth versus regulation, he argues that the fundamental choice is between “open intelligence” and “privately governed intelligence.”

Open intelligence, in Wheeler’s framework, extends the principles of the open internet to the AI era. It means nondiscriminatory access to AI capabilities, interoperability between AI systems, data portability that prevents lock-in, and transparent governance processes that include public input. Just as the open internet enabled anyone to publish content, access information, or build applications, open intelligence would enable broad participation in AI development and deployment.

Privately governed intelligence represents the alternative path we’re currently heading toward. In this model, a few companies control access to AI capabilities, determine which applications can be built, set the terms for AI usage, and make governance decisions behind closed doors. Users and developers become subjects of private AI governance rather than participants in an open AI ecosystem.

The stakes couldn’t be higher. Wheeler argues that AI represents “the greatest engine of productivity since electrification,” but only if we prevent bottlenecks from hardening into permanent gatekeepers. If AI development continues on its current path, we risk creating an intelligence infrastructure that serves the interests of a few dominant companies rather than maximizing social and economic value.

Wheeler’s framework shows how this isn’t just about competition or innovation—it’s about power and governance in the intelligence era. The companies that control AI infrastructure don’t just profit from it; they determine whose voices get heard, whose problems get solved, and whose innovations reach the public.

The parallel to earlier infrastructure is telling. We decided long ago that essential infrastructure like electricity, telecommunications, and transportation should be subject to public oversight and nondiscriminatory access requirements. Wheeler argues that AI is becoming similarly essential infrastructure, but we’re allowing it to be governed by purely private interests.

The choice between open and privately governed intelligence isn’t abstract—it’s being made right now through specific policy decisions about data portability, API access, computing infrastructure, and market structure. Each choice moves us further toward one model or the other, and Wheeler warns that the window for choosing openness may be closing rapidly.

The Defining Policy Choice of the Intelligence Era

Wheeler’s analysis concludes with an urgent call to action for policymakers. The current moment represents a unique opportunity to shape AI’s development trajectory, but that opportunity won’t last forever. As he puts it: “If we wait until dominance is complete, the debate will be over—not because the public decided, but because the powerful already did.”

The urgency stems from the nature of network effects and infrastructure investments in technology markets. Once AI infrastructure hardens around particular companies and approaches, switching costs become enormous. Applications built on proprietary APIs become locked in. Data accumulates in closed systems that resist interoperability. Computing investments create sunk costs that justify further concentration.

Wheeler’s historical analysis shows how this pattern has played out before. By the time policymakers recognized the concentration problems in social media and digital advertising, these markets had become so entrenched that intervention became exponentially more difficult. Users had invested years building networks and content on particular platforms. Businesses had integrated their operations around specific advertising systems. Developers had built applications that depended on platform-specific APIs.

The AI ecosystem is at a much earlier stage, which creates both opportunity and urgency. We can still shape how AI infrastructure develops, but only if we act before the current trajectory becomes irreversible. Wheeler argues that this requires moving beyond reactive regulation toward proactive framework-setting that guides AI development in constructive directions.

The policy framework Wheeler outlines isn’t about slowing AI development or limiting innovation. Instead, it’s about ensuring that AI’s transformative potential benefits society broadly rather than concentrating power and profits among a few dominant companies. This means focusing on market structure issues that enable competition and innovation rather than trying to regulate AI applications directly.

Wheeler emphasizes that getting market structure right is the prerequisite for addressing other AI challenges. Issues like bias, safety, privacy, and security are all easier to address in competitive markets where companies face pressure to serve users well than in concentrated markets where dominant companies face little competitive pressure.

The choice, as Wheeler frames it, isn’t between innovation and regulation—it’s between different models of how innovation happens. Competitive markets with appropriate oversight tend to produce more innovation, faster improvement, and better outcomes for users than concentrated markets with minimal oversight.

Wheeler’s concluding message is both hopeful and urgent. We have the tools and knowledge to guide AI development toward open intelligence rather than privately governed intelligence. We have historical precedent from other infrastructure transformations. We have policy frameworks that can promote innovation while preventing concentration. But we need to act while the infrastructure is still being built, not after it’s already hardened into place.

The defining issue of the intelligence era, Wheeler argues, is whether we’ll have the wisdom and political will to choose open intelligence over private governance. The decision we make will shape not just the AI industry, but the broader trajectory of technological development and economic power for generations to come.

Frequently Asked Questions

What are the three AI chokepoints that Tom Wheeler identifies?

Wheeler identifies three critical chokepoints in the AI ecosystem: 1) Data access and portability, 2) Computing power and cloud infrastructure, and 3) Model APIs and distribution platforms. These chokepoints allow a few companies to control who can participate in the AI economy.

How much could be spent globally on AI data centers by 2030?

According to McKinsey estimates cited by Wheeler, over $5 trillion could be spent globally on AI data centers by 2030, representing a massive infrastructure buildout that could concentrate computing power further.

What percentage of AI model usage and revenue do closed models capture?

Research from MIT and Georgia Tech shows that closed AI models capture approximately 80% of usage and 96% of revenue in the AI model market, demonstrating the dominance of proprietary over open models.

What does Wheeler mean by ‘open intelligence’ as opposed to ‘privately governed intelligence’?

Open intelligence refers to an AI ecosystem built on principles of nondiscriminatory access, data portability, and interoperability—similar to how the open internet enabled innovation. Privately governed intelligence means AI development controlled by a few dominant companies who can determine which applications and innovations reach the public.

Why does Wheeler argue that international cooperation is better than digital mercantilism in AI policy?

Wheeler argues that AI is inherently transnational and protectionist approaches ultimately hurt the nations they claim to protect. Domestic competition is a prerequisite for international competitiveness, and compatible (not identical) oversight across democratic systems is needed to maintain innovation while preventing concentration.