How AI Could Reshape Competitions in Future Warfare
Table of Contents
- Why AI Future Warfare Demands a New Strategic Framework
- The Four Building Block Competitions AI Reshapes
- AI and the Quantity Versus Quality Revolution
- Precise Mass and Affordable Mass in AI Warfare
- The Fog of War Machine: AI-Powered Deception
- Why the Transparent Battlefield Is a Myth
- AI Future Warfare and Mission Command
- Cyber Defense Gains the AI Advantage
- Policy Recommendations for Military AI Competition
- What This Means for Global Defense Strategy
📌 Key Takeaways
- Quantity over quality: AI-enabled autonomous systems shift the cost-effectiveness calculus decisively toward fielding larger numbers of cheaper platforms rather than small fleets of exquisite weapons.
- Deception beats detection: The computationally harder task belongs to the finder, not the hider — AI-powered fog of war machines can generate confusion faster than sensors can resolve it.
- Mission command endures: AI does not eliminate the information asymmetry between theater and tactical levels, preserving the advantages of decentralized execution.
- Cyber defense wins long-term: AI can dramatically reduce software vulnerability density, from roughly 3 exploitable flaws per 10,000 lines of code today to potentially 3 per 100,000 lines, shrinking the attack surface by an order of magnitude.
- Organizational change is decisive: The military that best manages the transition to AI-enabled operations will hold the advantage, regardless of who achieves initial technological breakthroughs.
Why AI Future Warfare Demands a New Strategic Framework
Artificial intelligence is poised to reshape the fundamental dynamics of military competition in ways that extend far beyond autonomous drones and killer robots. A landmark 2026 report from the RAND Corporation — one of the world’s most influential defense research institutions — provides a rigorous analytical framework for understanding how AI future warfare will transform four foundational competitions that have defined armed conflict for centuries.
The researchers adopt a deliberately provocative analytical heuristic: what happens when AI removes the limits of human cognitive capacity as a constraint on military operations? They do not assume superintelligence, but rather human-level performance across a wide variety of cognitive tasks. Physics and information theory still apply. The enemy still adapts. And both sides have access to advanced AI capabilities.
This framework matters because much of the current discourse around AI in defense oscillates between utopian visions of perfectly transparent battlefields and dystopian fears of autonomous weapons run amok. The RAND analysis cuts through both extremes to identify where AI creates durable structural advantages — and where it merely accelerates existing dynamics. For anyone seeking to understand how AI will reshape military competition, the report is essential reading.
The timeframe under consideration is roughly 10 to 20 years — long enough for AI capabilities to mature significantly, but grounded in technologies and physical constraints that are well understood today. The analysis specifically addresses peer competition where neither side enjoys overwhelming resource advantages nor a monopoly on advanced AI.
The Four Building Block Competitions AI Reshapes
RAND’s analytical framework identifies four fundamental “building block” competitions that underpin warfare across domains and eras. Each represents a binary tension where AI could shift the balance in one direction or another. Understanding these shifts is critical for defense planners, policymakers, and technologists alike.
The first competition pits quantity against quality — whether it is better to field many adequate systems or fewer superior ones. The second weighs hiding against finding — the eternal contest between concealment and surveillance. The third examines centralized versus decentralized command and control — how decisions should flow through military hierarchies. And the fourth considers cyber offense versus cyber defense — whether attackers or defenders hold the structural advantage in digital warfare.
What makes RAND’s approach distinctive is its application of four specific mechanisms through which AI creates military impact: insight (analyzing information and generating knowledge), autonomy (replacing human labor), management (coordinating complicated actions across many systems), and decision support (improving the quality of human decisions). Each mechanism operates differently across the four competitions, creating a rich analytical matrix that reveals non-obvious implications.
The report draws on historical data from World War II aerial combat, current Ukraine conflict data, recent AI flight tests, and formal mathematical models including Lanchester equations and Bayesian network theory. This combination of empirical evidence and rigorous modeling gives the analysis a depth that transcends typical think-tank commentary on military AI.
AI and the Quantity Versus Quality Revolution
Perhaps the most consequential finding in the RAND report is that AI could produce greater relative advantages for quantity over quality — a dramatic reversal of the trend that has dominated Western military thinking since the Cold War. For decades, the United States and its allies have invested in smaller numbers of increasingly sophisticated weapons systems: stealth fighters, precision-guided munitions, and networked battle management systems designed to multiply the effectiveness of individual platforms.
AI disrupts this calculus through two complementary mechanisms that the researchers term precise mass and affordable mass. Precise mass, a concept borrowed from political scientist Michael Horowitz, describes how cheaper one-way attack drones now achieve precision levels previously reserved for expensive guided missiles and crewed strike aircraft. Affordable mass refers to how AI-enabled robotic systems make pursuing quantitative superiority cost-effective in ways that were previously impractical.
The evidence is compelling. Using Lanchester Square Law equations applied to historical air combat data, the researchers analyze engagements between German Me 262 jet fighters and American P-51D Mustangs in the final months of World War II. Drawing from combat data on March 18 and April 10, 1945 — involving 1,415 U.S. fighter sorties against 92 German Me 262 sorties — they estimate the Me 262 held approximately a 9:1 lethality advantage over the P-51D, combining its revolutionary jet propulsion with experienced Luftwaffe pilots.
Under the Lanchester Square Law, this 9:1 lethality advantage meant the Me 262 could overcome being outnumbered only 3:1. The Americans fielded more than 15 times as many sorties, overwhelming German quality with sheer numbers. The lesson for AI future warfare is stark: even enormous quality advantages create relatively modest thresholds that quantity can overcome.
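To see the arithmetic behind this conclusion, the short sketch below (a minimal illustration, not RAND's own model) applies the standard Lanchester square-law comparison to the sortie counts and the estimated 9:1 lethality ratio quoted above: a force's effective strength scales with its lethality times the square of its numbers.

```python
# Minimal Lanchester square-law comparison. The sortie counts and 9:1 lethality
# ratio come from the article; the square-law formula is the standard textbook
# version, not RAND's exact implementation.

def square_law_winner(n_a, lethality_a, n_b, lethality_b):
    """Compare square-law strengths: lethality times numbers squared."""
    strength_a = lethality_a * n_a ** 2
    strength_b = lethality_b * n_b ** 2
    return "A" if strength_a > strength_b else "B"

# 1,415 P-51D sorties (baseline lethality) vs. 92 Me 262 sorties (~9x lethality).
print(square_law_winner(1415, 1.0, 92, 9.0))  # -> "A": quantity overwhelms quality

# A 9:1 quality edge only offsets a numerical deficit up to sqrt(9) = 3:1.
print(9 ** 0.5)  # -> 3.0
```

Because strength grows with the square of numbers, even a 9:1 quality edge offsets at most a 3:1 numerical deficit, which is exactly the threshold the historical example illustrates.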
Precise Mass and Affordable Mass in AI Warfare
The cost dynamics of AI-enabled warfare fundamentally change the quantity-quality tradeoff. RAND’s cost analysis reveals that one Chengdu J-20 — China’s fifth-generation stealth fighter with an empty weight of 42,750 pounds — costs approximately 4.71 times as much as a notional autonomous robotic fighter with characteristics comparable to the XQ-58 Valkyrie drone at 2,500 pounds empty weight.
This cost ratio dramatically expands the region where robotic mass prevails at a cost advantage over exquisite crewed fighters. In the World War II analogy, the Me 262-to-P-51D cost ratio was only about 1.5:1, meaning the American quantity strategy was actually more expensive per engagement. AI changes this equation entirely — cheaper platforms can now be fielded in far greater numbers while maintaining enough capability to overwhelm qualitatively superior adversaries.
The mathematical implications are precise. RAND demonstrates that a force of 401 autonomous drones would defeat 200 drones even if the 200 were four times as lethal per engagement. The calculus shifts the focus from individual probability of kill to salvo probability of kill — what matters is not whether any single weapon hits its target, but whether the combined attack achieves the desired effect.
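The sketch below works the 401-versus-200 example and the salvo probability-of-kill idea numerically. The square-law comparison mirrors the example in the text; the independence assumption in the salvo calculation is a standard textbook simplification rather than a figure from the report.

```python
# Square-law check of the 401-vs-200 example, plus a simple salvo Pk illustration.
# The independent-shots assumption in salvo_pk is a textbook simplification.

def square_law_strength(n, lethality):
    """Lanchester square-law strength: lethality times numbers squared."""
    return lethality * n ** 2

# 401 baseline drones vs. 200 drones that are four times as lethal per engagement.
print(square_law_strength(401, 1.0))  # 160801
print(square_law_strength(200, 4.0))  # 160000 -> the larger force narrowly wins

def salvo_pk(p_single, n_weapons):
    """Probability that at least one weapon in a salvo kills, assuming independent shots."""
    return 1 - (1 - p_single) ** n_weapons

# Many cheap weapons with a modest individual Pk still yield a high salvo Pk.
print(round(salvo_pk(0.2, 10), 3))  # ~0.893
```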
Real-world validation is already emerging. In 2024, an AI-controlled F-16 put up a “roughly even fight” against a crewed F-16 with an expert human pilot. Ukraine has been losing as many as 10,000 drones per month to Russian jamming — a staggering attrition rate that only quantity-focused approaches can sustain. These data points confirm the trend RAND identifies: the future belongs to those who can field mass effectively, not those who build the most sophisticated individual platforms.
Geography further compounds these dynamics. RAND’s sortie rate modeling shows that in a Taiwan contingency, China holds a 2.7:1 force advantage along its coast due to the proximity of Chinese air bases, diminishing to 1.4:1 at greater distances. Overcoming this requires either a dramatically larger U.S. force structure or a fundamentally new approach using runway-independent autonomous combat vehicles distributed across allied territory.
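A toy model, using purely illustrative numbers rather than RAND's inputs, conveys the mechanism: sortie rate falls as transit distance grows, so a force based close to the fight generates several times the daily combat power of the same force based far away.

```python
# Illustrative sortie-rate model. Cruise speed and turnaround time are assumed
# values chosen only to show the qualitative effect of basing distance.

def sorties_per_day(one_way_km, cruise_kmh=850.0, turnaround_h=3.0):
    """Daily sorties per aircraft: 24 hours divided by one mission cycle."""
    cycle_h = 2 * one_way_km / cruise_kmh + turnaround_h
    return 24.0 / cycle_h

near = sorties_per_day(300)    # based close to the fight
far = sorties_per_day(3000)    # based a long transit away
print(round(near, 2), round(far, 2), round(near / far, 2))  # close basing wins several-fold
```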
The Fog of War Machine: AI-Powered Deception
One of the report’s most original contributions is the concept of the fog of war machine — AI-enabled planning and battle-management software that helps military operators develop and execute elaborate deception campaigns at machine speed. This concept draws on historical precedents like Operation Fortitude, the Allied deception campaign that convinced Germany the D-Day invasion would target Pas-de-Calais rather than Normandy, and Operation Bagration, the Soviet deception that concealed a massive offensive on the Eastern Front.
The fog of war machine integrates low-tech measures — attacking enemy sensors, controlling electromagnetic emissions, deploying basic physical decoys — with high-tech capabilities including autonomous robotic decoys, cyberattacks to corrupt sensor data, and the selective revelation of true information designed to mislead. The AI orchestrates all of these simultaneously, adjusting the deception plan in real time based on observed adversary behavior.
This represents a fundamental shift from how Western militaries have approached deception since World War II. As the RAND authors note, the United States has treated deception as an afterthought for decades, investing primarily in stealth and concealment rather than active misdirection. AI makes sophisticated deception operationally feasible at a scale and tempo that was previously impossible without enormous human staff effort. For defense planners, the implications are profound.
The analogy the researchers use is vivid: “the finder needs to solve a puzzle, and the hider is making that harder by removing some puzzle pieces from the box, adding fake puzzle pieces at the same time, and trying to simultaneously change the picture so that solving the puzzle will result in an outdated solution.” This asymmetry — where increasing uncertainty is computationally easier than resolving uncertainty — gives the hider a structural advantage that AI amplifies rather than eliminates.
Why the Transparent Battlefield Is a Myth
The vision of a “transparent battlefield” where AI-enabled sensors can find and track everything has been a recurring theme in defense discourse for three decades. In 1996, U.S. Air Force Chief of Staff General Ronald Fogleman predicted that by the early 21st century, it would “be possible to find, fix or track, and target anything that moves on the surface of the earth.” More recently, the head of U.S. Army Futures Command predicted that by 2040, “the ability to hide, which is fundamental to how we fight, [will be] impossible.”
RAND’s analysis provides a rigorous theoretical basis for why these predictions are fundamentally wrong. The key insight rests on computational complexity theory: information fusion is equivalent to inference in Bayesian belief networks, which is provably NP-hard. No tractable algorithm, exact or approximate, can guarantee correct fused answers within a bounded time as the problem grows. Even a hypothetical superintelligent AI faces this constraint — it is a mathematical limit, not an engineering problem waiting for more processing power.
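A toy counting argument (not the report's Bayesian-network formulation) conveys the intuition: the number of ways to associate sensor detections with candidate objects grows factorially, so every decoy the hider adds multiplies the finder's hypothesis space while costing the hider almost nothing.

```python
# Illustrative counting argument for why fusion scales badly. This is a toy
# model of data association, not the report's formal Bayesian-network analysis.
from math import factorial

# Association hypotheses if every one of n detections could be any of n objects.
for n_objects in (5, 10, 15, 20):
    print(n_objects, factorial(n_objects))
# 5 -> 120; 10 -> 3,628,800; 20 -> ~2.4e18. Resolving the ambiguity blows up
# combinatorially, while creating it (adding one more decoy) stays cheap.
```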
Three variables determine whether the hider or finder holds the advantage in any given scenario. First, mass — which side can field more sensors or decoys. Second, information type — it is harder to hide the general positions of large formations but easier to create uncertainty about exact locations of mobile high-value targets. Third, domain and environment — ground and undersea environments naturally favor hiding, air and sea surfaces are intermediate, and space offers the weakest conditions for deception due to the predictability of orbital mechanics.
The practical implication is that militaries investing exclusively in advanced sensing capabilities while neglecting deception are making a strategic error. The RAND report argues forcefully that the United States must develop both sides of this competition — better sensors and better fog of war machines — to maintain competitive advantage against peer adversaries who will certainly be investing in both.
AI Future Warfare and Mission Command
Perhaps surprisingly, RAND concludes that AI will not fundamentally change the advantages of mission command — the principle of centralized control with decentralized execution — over either more centralized or more decentralized command and control paradigms. This finding challenges both those who believe AI will enable perfect centralized control and those who argue AI swarms make traditional command structures obsolete.
The reasoning centers on a crucial distinction: the limiting factor in military command is access to information, not cognitive capacity to process it. Theater-level commands have greater awareness of the operational environment — diplomatic constraints, strategic objectives, resource availability across the force. Forward tactical commands have greater awareness of specific engagement conditions — terrain, enemy disposition, local weather, the morale and readiness of their own units.
This information asymmetry persists even with perfect AI because it is rooted in physical reality. Communications bandwidth remains limited, especially in contested electromagnetic environments where adversaries actively jam signals. Enemy action can sever communications entirely. And critically, much of the knowledge that makes tactical execution effective is tacit knowledge — shared contextual understanding that is rarely explicitly communicated and extremely difficult to transmit digitally.
AI-enabled swarming, rather than replacing mission command, is naturally compatible with it. A human commander sets the intent and boundaries; the AI swarm determines the optimal decentralized execution within those parameters. This mirrors the traditional relationship between a commanding officer issuing mission orders and subordinate units determining how best to accomplish the mission given their local conditions. The U.S. Department of Defense has increasingly recognized this compatibility in its approach to autonomous systems development.
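A notional sketch of that division of labor appears below; the data structures and names are hypothetical, not an interface from the report or any real system. The commander publishes intent once (an objective plus boundaries), and each agent chooses its own action locally within those constraints.

```python
# Hypothetical sketch of "centralized intent, decentralized execution": the
# commander sets an objective and boundaries; each swarm agent picks its own
# next move locally, with no per-step orders from above.
from dataclasses import dataclass

@dataclass
class Intent:
    objective: tuple      # (x, y) where the commander wants effects
    no_go_zones: list     # list of (x, y, radius) circles to avoid

def choose_waypoint(agent_pos, intent):
    """Each agent steps toward the objective unless that step enters a no-go zone."""
    x, y = agent_pos
    ox, oy = intent.objective
    step = (x + (ox - x) * 0.1, y + (oy - y) * 0.1)
    for zx, zy, r in intent.no_go_zones:
        if (step[0] - zx) ** 2 + (step[1] - zy) ** 2 < r ** 2:
            return agent_pos    # hold position and replan locally on the next tick
    return step

intent = Intent(objective=(100.0, 40.0), no_go_zones=[(50.0, 20.0, 10.0)])
print(choose_waypoint((0.0, 0.0), intent))   # the agent decides without per-step orders
```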
Cyber Defense Gains the AI Advantage
In the domain of cyber warfare, RAND’s analysis reaches a counterintuitive conclusion: AI benefits defense more than offense in the long term. This finding challenges the conventional wisdom that cyber attackers hold a permanent structural advantage because they need to find only one vulnerability while defenders must protect every potential entry point.
The argument rests on a formal model of software vulnerabilities. Each system has N lines of code containing V exploitable vulnerabilities. The current industry rule of thumb is approximately 3 exploitable vulnerabilities per 10,000 lines of code. The probability that an attacker and a defender independently discover the same specific vulnerability is 1/V², so when V is large the defender is unlikely to have found and patched the particular flaw an attacker exploits, and closing that gap requires disproportionate defender resources.
AI changes this equation by enabling defenders to produce less error-prone code in the first place. If AI-assisted development reduces vulnerability density from 3 per 10,000 lines to 3 per 100,000 or even 3 per 1,000,000 lines, the defender’s task becomes dramatically more manageable. The attack surface shrinks by orders of magnitude, and the remaining vulnerabilities become harder for attackers to discover even with AI assistance.
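A small sketch of that arithmetic, using the 1/V² overlap term stated above and treating density as a direct multiplier on the vulnerability count (a simplifying assumption):

```python
# Sketch of the vulnerability-density arithmetic. The 1/V**2 overlap term follows
# the model stated in the text; scaling V linearly with density is an assumption.

def vulnerability_count(lines_of_code, density_per_10k):
    """Expected exploitable flaws: density scaled to the size of the code base."""
    return lines_of_code * density_per_10k / 10_000

for density in (3.0, 0.3, 0.03):           # 3 per 10k, 100k, and 1M lines
    v = vulnerability_count(1_000_000, density)
    overlap = 1 / v ** 2                    # chance attacker and defender independently
                                            # find the same specific flaw, per the text
    print(f"{density}/10k lines -> {v:.0f} flaws, overlap {overlap:.1e}")
# Fewer residual flaws means the defender is far more likely to have already
# found and patched whatever the attacker eventually finds.
```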
The adoption trajectory supports this analysis. By 2023, 92 percent of U.S. software developers reported using AI coding tools. By 2024, Google claimed AI generated more than 25 percent of its new code. As these tools mature, their ability to identify and prevent common vulnerability patterns — buffer overflows, injection attacks, authentication flaws — will improve significantly.
However, RAND includes an important caveat: in the short term, AI coding tools may actually increase vulnerabilities. Early research shows AI-generated code can introduce new problems, including “slopsquatting” — where AI hallucinates package names that malicious actors then register and populate with malware. The transition period requires careful management, with AI tools used to augment rather than replace human code review processes.
Policy Recommendations for Military AI Competition
The RAND report concludes with nine specific policy recommendations that together constitute a blueprint for how the United States should prepare for AI-enabled warfare. The overarching message is that organizational change matters more than technological breakthroughs — the military that best manages the transition to an AI-enabled force will hold the decisive advantage.
First, the U.S. should begin investing immediately in mass and new deception capabilities, even while AI technology continues to mature. Opportunities exist today with current technology. Second, defense planners should allocate resources assuming sophisticated, adaptive adversaries — resisting the temptation to pursue an unassailable AI first-mover advantage that is unlikely to materialize given the global diffusion of AI capabilities.
Third, the Department of Defense needs a comprehensive transition plan that addresses the balance between exquisite capabilities and robotic mass, how to field AI tools that balance speed with trust and reliability, and how to train human personnel and AI systems in symbiotic ways.
Fourth, force planning should shift toward greater mass while retaining a high-low mix — moving from one-to-one replacement thinking to examining how combinations of different systems provide greater cost-effectiveness. Fifth, new requirements for deception must be incorporated into force planning from the beginning, rather than asking how to hide platforms after they are designed.
The remaining recommendations address countering adversary mass through electronic warfare and directed-energy weapons, solving logistics challenges for mass-based force structures through autonomous logistics platforms, continuing reforms to the defense industrial base to enable higher-volume production, and investing in AI-enabled cybersecurity with an emphasis on pre-deployment vulnerability reduction.
What This Means for Global Defense Strategy
The RAND report’s implications extend well beyond the Pentagon. Every nation developing military AI capabilities faces the same fundamental building block competitions, and the analytical framework applies universally. The report’s central insight — that AI’s most significant military implications lie in its effects on the underlying economics of warfare — suggests that defense budgets worldwide will need to be restructured around mass production capabilities rather than boutique weapons programs.
For allied nations, the implications are particularly significant. If the United States shifts toward mass-focused force structures, alliance burden-sharing calculations change dramatically. Smaller allies may find their niche not in fielding expensive crewed platforms but in producing and operating large numbers of autonomous systems — a model that aligns well with the industrial capabilities of many advanced economies.
The report also carries implications for arms control and international law. Autonomous weapons systems operating in large numbers raise distinct ethical and legal questions compared to individual precision weapons. The fog of war machine concept, while operationally compelling, complicates efforts to distinguish military deception from violations of international humanitarian law regarding perfidy.
Perhaps most importantly, RAND warns that “successfully navigating the AI adoption process might be even more challenging and important than achieving the initial innovative breakthroughs with advanced AI, and it is far from clear that the United States will have an edge in the adoption competition.” This sobering assessment suggests that the decisive competition in AI future warfare is not technological but organizational — a contest of institutional agility, strategic vision, and the willingness to embrace fundamental change.
As this analysis demonstrates, understanding complex defense research requires more than surface-level summaries. The interplay between mathematical models, historical data, and strategic reasoning demands careful engagement with the primary sources themselves.
Frequently Asked Questions
How will AI change the balance between quantity and quality in future warfare?
AI enables ‘precise mass’ and ‘affordable mass’ by making cheaper autonomous systems highly effective. RAND analysis shows a J-20 fighter costs roughly 4.71 times as much as a notional autonomous fighter comparable to the XQ-58 Valkyrie, meaning militaries can field far more AI-enabled platforms for the same budget without sacrificing critical precision capabilities.
Will AI create a transparent battlefield where nothing can hide?
No. RAND researchers demonstrate that sensor fusion is equivalent to inference in Bayesian belief networks, which is provably NP-hard. This means even superintelligent AI faces fundamental computational limits. AI-powered ‘fog of war machines’ can generate deception faster than adversaries can resolve it.
What is the fog of war machine concept in AI warfare?
A fog of war machine is AI-enabled planning software that helps military operators develop and execute elaborate deception schemes using autonomous decoys, cyberattacks to corrupt sensor data, and selective revelation of true information to maximize adversary confusion.
How does AI affect military command and control structures?
AI does not eliminate the advantages of mission command (centralized intent, decentralized execution). Information asymmetry between theater-level and tactical commands persists because access to information, not cognitive capacity, is the limiting factor. AI-enabled swarms are compatible with mission command principles.
Will AI favor cyber attackers or defenders in future conflicts?
In the long term, AI benefits cyber defense more than offense. AI can reduce software vulnerabilities from the current rate of 3 per 10,000 lines of code to potentially 3 per 100,000 lines, dramatically shrinking the attack surface. However, short-term risks exist as AI coding tools may initially introduce new vulnerabilities.
What policy changes does RAND recommend for the US military regarding AI?
RAND recommends investing in mass production of autonomous systems, developing AI-powered deception capabilities, shifting force planning toward quantity while maintaining a high-low mix, reforming the defense industrial base for higher-volume production, and prioritizing AI-enabled cybersecurity.