AI Cybersecurity and National Security: Key Insights from RAND’s Urgent Warning

📌 Key Takeaways

  • Unprecedented Scale: AI could amplify operationally useful intelligence by seven to eight orders of magnitude, fundamentally transforming cybersecurity warfare
  • US Vulnerabilities Exposed: 90% of the software products managing the US electric grid contain code contributions from Chinese or Russian developers, many with critical vulnerabilities
  • Speed Determines Outcome: The direction of AI’s impact on cybersecurity depends not on the technology itself but on who assimilates it faster — and less fastidious adversaries may move first
  • Structural Failures: No dedicated service exists for cybersecurity; US Cyber Command’s budget is one-tenth that of traditional military branches
  • Urgent Reform Needed: The report recommends mandatory 120-day government pre-release access to frontier AI models and creation of dedicated cyber career paths

Why RAND Says AI Cybersecurity Is a National Emergency

In July 2025, the RAND Corporation published one of the most consequential national security papers of the decade. Authored by Richard Danzig — the 71st Secretary of the Navy and a senior fellow at Johns Hopkins University Applied Physics Laboratory — “Artificial Intelligence, Cybersecurity, and National Security: The Fierce Urgency of Now” delivers a stark warning to policymakers: the convergence of AI and cybersecurity represents a transformation as profound as the nuclear revolution, and the United States is dangerously unprepared.

Danzig’s central thesis cuts through the noise of AI hype with surgical precision. He argues that while government officials grasp the general truth that AI will be transformative, this understanding alone is insufficient. Drawing a vivid historical parallel, he writes: “Like Europeans reacting 500 years ago to first reports of a new world, even experts and certainly laypersons (including government officials) can have only naive intuitions about what is to come.” The paper examines not abstract possibilities but concrete, present dangers — what Danzig calls “the wolves closest to the sled” — created by human strategies to develop and exploit AI in competition with each other.

This analysis arrives at a critical inflection point. As AI-powered systems reshape industries from finance to defense, understanding the cybersecurity implications has become essential for every organization, not just government agencies. The RAND report provides the most authoritative roadmap to date for navigating this new reality.

AI as the New Assembly Line for Software Production

Danzig frames the AI revolution through an illuminating historical analogy: large language models are to software what Henry Ford’s assembly line was to manufacturing. Just as Ford combined large hardware investments with a revolutionary organizational technology to achieve high-rate mass production, LLMs combine massive computational infrastructure (graphics processing units) with generative learning to become what Danzig calls “consummate producers of software.”

But the analogy only goes so far. Danzig identifies three critical ways LLMs surpass Ford’s achievement:

  • Individualized mass production: While Ford famously offered customers “any color so long as it’s black,” LLMs enable mass production of individualized products — each piece of software tailored to specific needs
  • Entering a software-dependent world: AI arrives into a world already shaped by and dependent on software. Through that software’s connection to the internet, “AI is a mechanism for controlling other machines and therefore potentially anything and nearly everything”
  • Recursive self-improvement: Because AI is itself software, it is being used to improve AI. This creates the prospect of recursive self-improvement without substantial human intervention — something no previous industrial technology could achieve

Danzig reminds us of the stakes: America’s lead in mass production “determined, as much or more than any other single quality, the outcome of World War II.” The nation that leads in AI-automated software production will hold a comparable strategic advantage in the conflicts of the 21st century.

The indeterminate nature of generative AI compounds these concerns. Quoting Anthropic CEO Dario Amodei, Danzig notes that AI systems “are grown more than they are built — their internal mechanisms are ‘emergent’ rather than directly designed.” This creates opacity (we don’t know precisely why AI makes its choices), risk of unintended evolution, and a need for continuous adaptation by users.

The Multiplication Effect: Seven Orders of Magnitude

Perhaps the most striking calculation in the entire report is Danzig’s analysis of AI’s multiplicative power across four key variables: intelligence, speed, nonstop operations, and coordinated calculating power.

The math is staggering. AI systems can work roughly four times as many hours as a human (8,760 hours per year versus roughly 2,000) and at approximately 100 times human cognitive speed. Combined, those two factors yield a roughly 400x productivity multiplier. Factor in coordinated operations — the ability to run 100,000 to 1 million instances simultaneously — and the total amplification reaches “more than seven or eight orders of magnitude.”
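
As a sanity check on that arithmetic, here is the multiplication spelled out. The hours, speed, and instance-count figures are the report's own; the short Python snippet below simply combines them.

```python
import math

# Danzig's variables, using the report's own figures
hours = 8760 / 2000          # ~4.4x: round-the-clock operation vs. ~2,000 human work hours/year
speed = 100                  # ~100x human cognitive speed
instance_counts = (100_000, 1_000_000)  # coordinated parallel instances, low and high end

for n in instance_counts:
    total = hours * speed * n
    print(f"{n:>9,} instances -> {total:.2e} ({math.log10(total):.1f} orders of magnitude)")

# Output:
#   100,000 instances -> 4.38e+07 (7.6 orders of magnitude)
# 1,000,000 instances -> 4.38e+08 (8.6 orders of magnitude)
```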

To put this in context, Danzig offers three historical comparisons:

  • American agricultural productivity increased by roughly 10x over a century
  • The leap from TNT to nuclear weapons represented six orders of magnitude over half a century
  • Transistor production scaled by nine orders of magnitude over 75 years

LLMs are achieving comparable transformation of software production “in a period of a few years (or less).” Danzig provocatively suggests the term “artificial intelligence” — coined 70 years ago — is now a misnomer. Given that AI functions as an instrument of mass production rather than a singular thinking machine, “AI would more aptly stand for automated intelligence.”

The evidence from the coding domain supports this thesis. By early 2025, over 30% of new code at Google was AI-generated. OpenAI’s o3 model scored at the 99.8th percentile on Codeforces — better than all but two out of every thousand human competitive programmers. Amodei predicted that by late 2025, AI would be writing 90% of all code, and within a year, “essentially all of the code.” For organizations grappling with how technology is reshaping operations, these numbers demand immediate attention.

How AI Transforms Cyber Offense: Reconnaissance to Exploitation

Danzig describes the cybersecurity landscape as “a Hobbesian world — a war of all against all unmoderated by any overarching authority or generally effective deterrence.” Within this anarchic environment, the United States finds itself asymmetrically disadvantaged: it “apparently rarely engages in offensive operations and does not allow its citizens to do so,” while Americans and American companies are regularly attacked and frequently held for ransom.

The existing threat landscape is already alarming. As former Deputy National Security Advisor Anne Neuberger documented, Chinese malware has been discovered embedded in water treatment plants in Hawaii and Texas, pipelines in the American heartland, and critical infrastructure systems in allied nations. These capabilities represent an integrated toolkit that could “fundamentally alter the strategic balance in a confrontation, particularly over Taiwan.”

AI supercharges offensive capabilities across three distinct phases:

Reconnaissance

AI systems excel at scanning for vulnerabilities thanks to their ability to locate and assess equipment specifications, manuals, and data — including information secured through nonpublic means — and then analyze relevant code at high speed and low cost. An AI system called XBOW emerged as a leader in the most-watched bug-bounty competition on HackerOne. Danzig’s conclusion is stark: “security by obscurity will be unlikely.”

The most vulnerable targets sit between heavily protected major systems and obscure small ones: “important societal functions that rely on less proliferated combinations of software” — the power grid, water systems, and the roughly 8,000 election jurisdictions that count American votes.

Social Engineering

AI has transformed credential theft and phishing into precision weapons. Swiss researchers found that LLMs “exploit personal information to tailor arguments effectively… far more effectively than humans.” Since ChatGPT’s mainstream adoption, social engineering attacks have increased 135% and voice phishing has surged 260%. In one dramatic case, a Hong Kong clerk was tricked into paying out HK$200 million (roughly US$25 million) after a video call in which every other participant was an AI-generated deepfake.

Exploitation

This phase remains the most challenging for AI. Of approximately 1.66 million US software developers, “perhaps a thousand” possess exploit development skills, and “a handful” are responsible for most successful exploits. Current AI models achieve nearly 100% success on tasks taking humans less than 4 minutes but succeed less than 10% of the time on tasks requiring more than 4 hours — a limitation known as “context saturation.”

Social Engineering in the Age of Deepfakes and LLMs

The transformation of social engineering deserves special attention because it represents the most immediately dangerous intersection of AI and cybersecurity. Traditional phishing relied on crude, often misspelled emails sent to thousands of targets. AI enables what security professionals call “spear phishing at scale” — highly personalized attacks that combine knowledge of the target’s professional role, communication patterns, and social network.

The statistics are sobering. The 135% increase in social engineering attacks and 260% surge in voice phishing (vishing) since ChatGPT’s mainstream adoption reflect not just more attacks but fundamentally more convincing ones. Voice cloning technology, combined with deepfake video, creates attack vectors that were science fiction just three years ago.

The HK$200 million Hong Kong case illustrates the new paradigm: attackers didn’t just impersonate one person — they created an entire fake video conference with multiple AI-generated deepfakes of known colleagues. When the clerk saw and heard what appeared to be familiar faces making familiar requests, traditional verification instincts failed completely.

For enterprises and government agencies alike, this means that training programs focused on recognizing “suspicious emails” are dramatically insufficient. Organizations must implement multi-factor verification for high-value transactions, establish out-of-band confirmation protocols, and invest in AI-powered detection tools that can identify synthetic media. The National Institute of Standards and Technology (NIST) has begun developing frameworks for synthetic media detection, but adoption remains far behind the threat curve.
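
As a minimal illustration of what an out-of-band confirmation protocol can look like, the sketch below refuses large payments until a one-time code is exchanged over an independent, pre-registered channel. The threshold, the callback registry, and the delivery stub are hypothetical choices for illustration, not prescriptions from the report or NIST.

```python
import hmac
import secrets

HIGH_VALUE_THRESHOLD = 50_000  # hypothetical policy threshold, in dollars

# Hypothetical directory of callback numbers verified at onboarding,
# never taken from the payment request itself.
CALLBACK_REGISTRY = {"clerk-042": "+852-0000-0000"}

def deliver_code(phone: str, code: str) -> None:
    """Stand-in for an SMS or voice-call delivery integration."""
    print(f"[out-of-band] code sent to {phone}")

def approve_payment(amount: float, requester: str, read_back_code) -> bool:
    """Approve a payment only after out-of-band confirmation of large amounts."""
    if amount < HIGH_VALUE_THRESHOLD:
        return True  # low-value: routine controls apply
    code = secrets.token_hex(4)  # one-time code
    deliver_code(CALLBACK_REGISTRY[requester], code)
    # The approver reads the code back over that same independent channel.
    return hmac.compare_digest(code, read_back_code())

# Usage: approve_payment(200_000_000, "clerk-042", lambda: input("code: "))
```

The point of the design is that a deepfaked video call never carries the code: verification travels over a channel the attacker did not choose.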

AI-Powered Cyber Defense: First-Mover Advantage

While the offensive applications capture headlines, Danzig’s analysis of defensive capabilities offers a crucial counterpoint — and a reason for cautious optimism. A DARPA program manager expressed genuine surprise at AI’s defensive potential: “AI systems are capable of not only identifying but also patching vulnerabilities to safeguard the code that underpins critical infrastructure.”

The key insight is that first-mover advantage is substantial in cybersecurity. Defenders who acquire and assimilate AI capabilities first can identify vulnerabilities before attackers discover them, design and deploy patches proactively, identify employees vulnerable to compromise and move them out of sensitive positions, and establish AI-powered anomaly detection systems that flag intrusions in real time.
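
To make the last of those capabilities concrete, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest. The login-event features and thresholds are illustrative assumptions, not anything specified in the report.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Illustrative features per login session:
# [hour of day, megabytes transferred, distinct hosts touched]
baseline = np.column_stack([
    rng.normal(10, 2, 500),   # daytime logins
    rng.normal(50, 15, 500),  # modest data transfer
    rng.normal(3, 1, 500),    # a handful of hosts
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A 3 a.m. session moving 900 MB across 40 hosts should stand out.
suspicious = np.array([[3.0, 900.0, 40.0]])
print(detector.predict(suspicious))  # [-1] means flagged as anomalous
```

A production system would replace the toy features with real telemetry and route flagged sessions to analysts, but the shape of the pipeline is the same: learn a baseline, then score everything against it continuously.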

Danzig outlines several critical dynamics of the offense-defense balance in the AI era:

  • Leader advantage is useless if not used: The United States has failed to sufficiently harden cyber-physical systems due to “fragmentation of responsibilities, inadequate funding, distorted incentives”
  • New systems benefit most: AI-reviewed and AI-generated code should contain dramatically fewer vulnerabilities than legacy software (a minimal review-gate sketch follows this list)
  • LLM systems are themselves vulnerable: The tools used for defense can be subverted, misdirected, or simply produce errors
  • Resilience is essential: Since “some vulnerability will likely always persist,” the focus must be on rapid detection and recovery, not just prevention
  • Weakest link problem: Defenders must protect a much larger attack surface than any individual attacker needs to penetrate
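
On the second point above, here is a minimal sketch of what an AI review gate in a CI pipeline might look like, assuming the OpenAI Python SDK is available. The model name, prompt, and pass/fail convention are illustrative assumptions, not the report's prescription.

```python
import subprocess
import sys
from openai import OpenAI  # assumes the openai package and an API key are configured

PROMPT = (
    "Review this diff for security vulnerabilities (injection, auth bypass, "
    "unsafe deserialization, secrets in code). Reply with VULNERABLE or CLEAN "
    "on the first line, then explain."
)

def ai_review_gate() -> int:
    """Return a nonzero exit code if the model flags the current branch's diff."""
    diff = subprocess.run(
        ["git", "diff", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whatever your org approves
        messages=[{"role": "user", "content": PROMPT + "\n\n" + diff}],
    )
    verdict = resp.choices[0].message.content or ""
    print(verdict)
    return 1 if verdict.strip().upper().startswith("VULNERABLE") else 0

if __name__ == "__main__":
    sys.exit(ai_review_gate())
```

A gate like this complements rather than replaces static analysis and human review; its value lies in catching the classes of bugs that pattern-matchers miss.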

A particularly alarming statistic underscores the urgency: “90 percent of the software products used to manage the US electric system contain code ‘contributions’ from Chinese or Russian developers, many with critical vulnerabilities.” This supply chain risk means that even robust defensive AI cannot compensate for fundamentally compromised infrastructure components.

Google’s own assessment of its Gemini 2.5 Pro model found elevated cybersecurity risk, with capabilities exhibiting increased performance on multiple phases of real-world cyber attacks compared to previous models. Researchers noted it was “possible that subsequent revisions in the next few months could lead to a model that reaches significant risk of severe harm.” This candid self-assessment from a major AI developer illustrates how rapidly the threat landscape evolves.

Five Critical US Government Cybersecurity Failings

The most actionable section of Danzig’s report identifies five specific failings in US government cybersecurity posture. Each represents a structural vulnerability that no amount of technology can compensate for without institutional reform:

1. Failure to Recognize Cybersecurity as Foundational

The Department of Defense prioritizes AI applications for traditional domains — air, sea, land, and space — while treating cyber as a lower-priority domain. This, Danzig argues, “overlooks how the prioritized domains depend on cybersecurity.” In the 2020s, superiority in AI-powered cybersecurity is “central to national power generally and specifically to the ability to fight on land, in the air, on the sea, under the sea, and in space.”

2. Budgetary and Career Disadvantages

Each traditional military domain has a dedicated service fighting for resources and promoting talent. No equivalent service exists for cybersecurity. US Cyber Command operates on approximately one-tenth the budget of the Army, Air Force, or Navy. CISA — the Cybersecurity and Infrastructure Security Agency — functions with only about $3 billion per year and approximately 3,600 people, and the Trump administration has proposed cutting even that.

3. Treating AI as Fixed Deliveries

DoD approaches AI as something delivered in defined stages, when in reality AI is “a continuous revolution generating transformative possibilities accelerating on a nearly weekly basis.” Current acquisition processes designed for hardware procurement — multi-year contracts with fixed specifications — are fundamentally mismatched to AI’s pace of evolution.

4. Undervaluing Complementary Inputs

Even advanced AI requires human expertise, high-quality data, and proper incentive structures to be effective. The government must develop much greater capacity to continuously and rapidly assimilate AI capabilities — a challenge that requires investment in people and processes, not just technology.

5. Insufficient Industry Collaboration

Danzig criticizes both the Biden administration’s safety-focused approach and the Trump administration’s laissez-faire stance as inadequate. “The interaction between AI companies and the US government must have a third dimension: partnering to adapt the technology and to prepare national security recipients to meet national needs.” Without pre-release engagement, the government cannot capitalize on first-mover advantages, models arrive without required offensive capabilities, they cannot be integrated into classified systems, and they are not trained on relevant sensitive data.

Structural Reforms: Empowering Cyber Command and Beyond

Danzig’s reform proposals are sweeping but specific. He identifies five variables that must align for success — leadership, organizational structures, processes, human capital, and funding — comparing them to a slot machine: “getting one, two, three, or even four of these variables yields little reward.”

The centerpiece recommendation is to either establish a new military service for cybersecurity or empower US Cyber Command as a junior service, modeled on Joint Special Operations Command. Currently, Cyber Command relies on personnel detailed from other services for two- to four-year rotations — too short to develop deep expertise and institutional knowledge. An empowered Cyber Command would foster its own career paths, training pipelines, organizational culture, and budget authority.

Equally important is Danzig’s proposal to create nonprofit research institutions outside the government but dedicated to its mission, modeled on the Institute for Defense Analyses’ Center for Communications Research. These organizations could attract and retain expert talent with compensation, flexibility, and working environments that government agencies cannot match. Rob Joyce, formerly of the National Security Agency, proposed establishing a new IDA center specifically dedicated to AI research and development for defense applications.

The human capital challenge is acute. When the best AI researchers can earn seven-figure compensation in the private sector, government agencies competing with GS-scale salaries are fighting with one hand tied behind their back. Nonprofit intermediary institutions could bridge this gap while maintaining the mission-oriented focus essential for national security work. For professionals looking to understand the future of work in an AI-driven world, the cybersecurity sector exemplifies both the challenge and the opportunity.

Building Government-AI Company Partnerships

Danzig’s most provocative recommendation concerns the relationship between government and frontier AI companies. He argues that the premise — in practice and, if necessary, by law — should be that “the US government will have opportunities to participate — as a partner, as a beta site, as an investor, or in other ways — in the work of any AI company operating in America or using American technology.”

The preferred approach uses incentives: funding, access to government data, and assistance combating security threats. But Danzig doesn’t shy away from compulsion: failing voluntary cooperation, “a law or regulation might require that the US government have a 120-day period for exclusive use before the release of significant improvements to models unless prior access of at least that duration had been provided.”

This 120-day window would enable the government to fine-tune models on classified, operationally relevant cybersecurity datasets before adversaries gain access to the same capabilities through public release. Existing authorities under the Defense Production Act would likely be sufficient to implement such requirements without new legislation.

Danzig acknowledges legitimate objections: risks of corruption and favoritism, questions about whether lead-time advantages are substantial enough, and concerns that accelerating government model development could collapse the privileged access window as models improve faster. But his rebuttal is compelling: “the governmental capacity for assimilation of AI, built now, will be an invaluable foundation for the future.”

The partnership imperative extends beyond frontier AI companies. Critical infrastructure operators — the private companies running power grids, water systems, and telecommunications — need “incentives, subsidies, guidance, and, in some cases, direction to harden themselves against the cyber storms that are coming.” The adversary isn’t waiting, and neither should the defenders. North Korea alone has stolen more than $6 billion in cryptocurrency over the past decade — a sum so large that no other threat actor compares.

Ten Propositions for the AI and National Security Future

Danzig distills his analysis into ten broader propositions for those concerned with how AI, technology, and human decisions are co-evolving. These transcend the cybersecurity case study to address the fundamental nature of the AI transformation:

  1. AI as automated intelligence: The term “artificial intelligence” understates AI’s nature as an instrument of mass production, amplifying useful intelligence by orders of magnitude rather than creating a single superintelligent entity
  2. Generative AI’s indeterminacy: Because AI systems are grown rather than built, they produce emergent behaviors that resist prediction or precise control — opacity, unintended evolution, and continuous adaptation are inherent features, not bugs
  3. The coding revolution is the catalyst: AI’s transformation of software production is the foundational change from which all other consequences — offensive, defensive, economic, military — flow
  4. Speed of assimilation determines outcomes: The direction of AI’s impact depends far more on who assimilates it fastest than on the intrinsic nature of the technology
  5. First-mover advantage is real but perishable: Early adopters gain substantial benefits, but these advantages erode rapidly as competitors catch up — creating an urgent but narrow window for action
  6. Less fastidious actors may move first: Nations and criminal organizations unconstrained by bureaucratic processes, legal frameworks, and ethical considerations will be “early adapters” using open-source or stolen models
  7. Human competition creates the clearest dangers: The most immediate risks come not from autonomous AI but from human strategies to develop and exploit AI against each other
  8. Complementary investments are essential: AI alone is insufficient — human expertise, quality data, organizational processes, and proper incentives must be developed in parallel
  9. Neither safety-only nor laissez-faire suffices: Effective AI governance requires a third dimension of active partnership between government and industry to meet national needs
  10. The urgency is now: Unlike previous technological revolutions that unfolded over decades, AI is transforming capabilities in years or months — delay is itself a strategic choice with potentially catastrophic consequences

These propositions collectively paint a picture of a world where technological leadership is no longer a comfortable buffer but a constantly contested position. As UC Berkeley researchers confirmed, AI-powered reconnaissance and social engineering are already transforming the cybersecurity battlefield, with exploitation capabilities advancing rapidly behind them. Danzig’s final assessment serves as both warning and call to action: “AI introduces critical changes in armament for battles over software, but it does not end this warfare.”

For policymakers, corporate leaders, and security professionals alike, the message is clear: the fierce urgency of now is not rhetorical flourish but strategic reality. Every month of inaction widens the gap between the pace of AI capability development and the institutional capacity to harness it for defense. The question is no longer whether AI will transform national security — it is whether democratic nations will transform their institutions fast enough to remain secure in an AI-powered world.

Frequently Asked Questions

What is the main argument of RAND’s AI cybersecurity report?

Richard Danzig’s 2025 RAND report argues that AI is fundamentally transforming cybersecurity warfare by automating software production at unprecedented scale. He warns that the US government is critically underprepared, with fragmented responsibilities, inadequate funding, and insufficient collaboration with AI companies, and urges immediate structural reforms to maintain national security.

How does AI amplify cybersecurity threats according to RAND?

AI amplifies threats through three main vectors: reconnaissance (scanning for vulnerabilities at superhuman speed), social engineering (personalized phishing attacks that increased 135% since ChatGPT’s launch), and exploitation (developing attack chains). RAND estimates AI could amplify operationally useful intelligence by more than seven to eight orders of magnitude compared to human capability.

What are the five US government cybersecurity failings identified by RAND?

The five failings are: 1) DoD not recognizing AI will transform cybersecurity foundationally, 2) disadvantaging cybersecurity in budget and career competitions, 3) treating AI as fixed-stage deliveries rather than continuous evolution, 4) undervaluing the human expertise and data needed for effective AI assimilation, and 5) failing to establish close collaboration with frontier AI companies.

What policy reforms does the RAND report recommend for AI and national security?

Key recommendations include empowering US Cyber Command as an independent service with its own career paths and budget, creating nonprofit research centers to attract expert talent, establishing mandatory government pre-release access to major AI models (120-day exclusive use period), training AI on classified cybersecurity datasets, and dramatically increasing CISA’s budget from its current $3 billion per year.

Does AI favor cyber attackers or defenders?

According to RAND, AI offers substantial advantages to whichever side assimilates it first. Defenders who move quickly can identify and patch vulnerabilities, detect anomalies, and harden systems before attackers adapt. However, less fastidious actors like North Korea and criminal groups may adopt AI faster due to fewer bureaucratic constraints. The direction of AI’s effects depends more on speed of assimilation than the nature of the technology itself.

How fast is AI transforming software development and coding?

By early 2025, over 30% of new code at Google was AI-generated. Anthropic CEO Dario Amodei predicted AI would write 90% of code within months and essentially all code within a year. OpenAI’s o3 model scored at the 99.8th percentile on Codeforces, surpassing all but two out of every thousand human competitive programmers. This mass automation of coding directly impacts both offensive and defensive cybersecurity capabilities.
