America’s AI Action Plan 2025: White House Strategy for Global AI Dominance
Table of Contents
- What Is America’s AI Action Plan? Overview of the White House’s July 2025 AI Strategy
- How the U.S. Plans to Accelerate AI Innovation Through Deregulation
- AI and Free Speech: Eliminating ‘Ideological Bias’ in AI Systems
- Open-Source AI Strategy: Why the U.S. Is Betting on Open-Weight Models
- The AI Workforce Plan: How the White House Aims to Protect American Workers
- Building AI Infrastructure: Data Centers, Energy, and the ‘Build, Baby, Build’ Agenda
- Restoring American Semiconductor Manufacturing Under the AI Action Plan
- AI in National Defense: The Pentagon’s New AI Adoption Roadmap
- AI and National Security: How the U.S. Plans to Evaluate Frontier Model Risks
- International AI Diplomacy: Export Strategy, Alliances, and Countering China
- Combating Deepfakes and Synthetic Media: New Legal and Technical Frameworks
- What’s Missing from America’s AI Action Plan — Critical Gaps and Open Questions
📌 Key Takeaways
- Deregulation First: The plan prioritizes removing regulatory barriers at federal and state levels, explicitly reversing Biden’s EO 14110 and threatening to withhold federal AI funding from states with burdensome regulations.
- China as Primary Threat: China is repeatedly named as the strategic AI competitor, with calls for evaluating Chinese AI models for CCP alignment and countering Chinese influence in international governance bodies.
- Energy Infrastructure Crisis: The plan acknowledges U.S. energy capacity has stagnated since the 1970s and identifies this as a critical barrier to AI development, explicitly rejecting “radical climate dogma.”
- Free Speech Reframing: AI “safety” is redefined away from misinformation and DEI concerns toward ideological bias, with plans to remove references to misinformation, DEI, and climate change from federal AI frameworks.
- Open-Source Geostrategy: Open-source and open-weight AI models are positioned as tools for establishing American AI as the global standard, competing directly against Chinese alternatives.
What Is America’s AI Action Plan? Overview of the White House’s July 2025 AI Strategy
In July 2025, the White House released a comprehensive 28-page strategy document that frames artificial intelligence development as America’s new “space race”—one that the United States must win to maintain global technological, economic, and military dominance. Signed by Michael J. Kratsios (Director, Office of Science and Technology Policy), David O. Sacks (Special Advisor for AI and Crypto), and Marco A. Rubio (Secretary of State and acting National Security Advisor), America’s AI Action Plan represents the most significant shift in U.S. AI policy since the inception of federal AI governance frameworks.
The plan emerges from Executive Order 14179, signed January 23, 2025, titled “Removing Barriers to American Leadership in AI.” Unlike previous administrations that emphasized caution and regulation, this strategy explicitly prioritizes speed and deregulation as America’s competitive advantages. The document serves as both a direct repudiation of the Biden administration’s approach—specifically rescinding Executive Order 14110—and a comprehensive roadmap for what the authors describe as ensuring “American values” dominate global AI development.
Built around three foundational pillars, the plan outlines over 70 specific policy actions across 30 distinct initiatives involving 13+ federal agencies. The framework represents a philosophical shift from AI safety and ethics as traditionally understood to a focus on “free speech” and “objective truth” in AI systems, while positioning China not just as a competitor but as an existential threat to American technological supremacy.
What makes this plan particularly significant is its directive language: many provisions are written as imperative commands that suggest these policies will be implemented regardless of congressional action or state-level resistance. The plan even proposes using federal funding as leverage to compel state compliance with its deregulatory vision, a strategy that experts suggest could fundamentally reshape federal-state relationships in technology governance.
How the U.S. Plans to Accelerate AI Innovation Through Deregulation
Pillar I of America’s AI Action Plan centers on what the authors call “removing red tape and onerous regulation”—a systematic dismantling of existing AI oversight mechanisms at both federal and state levels. The plan’s first major initiative directs the Office of Science and Technology Policy (OSTP) to conduct a comprehensive Request for Information on federal regulations that allegedly hinder AI innovation, while simultaneously instructing the Office of Management and Budget (OMB) to review all federal agency regulations, rules, and guidance documents for potential elimination or modification.
Perhaps most controversially, the plan proposes conditioning federal AI funding on state regulatory climate. This provision would require federal agencies to “consider the regulatory climate for AI” when making funding decisions, effectively creating a financial incentive system that could pressure states to roll back their own AI governance frameworks. Legal experts have noted this represents an unprecedented use of federal spending power to influence state technology policy, potentially setting a precedent for future federal intervention in emerging technology regulation.
The plan also directs the Federal Trade Commission (FTC) to review prior AI-related investigations to determine whether they “unduly burden AI innovation.” This review explicitly questions whether previous enforcement actions against major technology companies may have stifled development, suggesting a fundamental shift toward more permissive regulatory interpretation. Simultaneously, the Federal Communications Commission (FCC) is tasked with evaluating whether state AI regulations violate the Communications Act, potentially creating grounds for federal preemption of state law.
The deregulatory approach extends to government procurement, with new guidelines requiring federal agencies to prioritize AI systems that promote “free speech” and “objective truth” over existing diversity, equity, and inclusion considerations. This represents a significant departure from current federal AI procurement guidelines, which emphasize bias mitigation and fairness considerations as primary evaluation criteria.
AI and Free Speech: Eliminating ‘Ideological Bias’ in AI Systems
The plan’s most philosophically significant shift involves redefining AI “safety” away from traditional concerns about misinformation, bias, and harmful content toward what it characterizes as “ideological bias” in AI systems. This reframing positions mainstream content moderation and bias mitigation efforts not as safety measures, but as forms of censorship that suppress “American values” and “objective truth.”
Central to this approach is a comprehensive revision of the National Institute of Standards and Technology (NIST) AI Risk Management Framework. The plan explicitly calls for removing references to misinformation, diversity, equity, and inclusion (DEI), and climate change from this foundational document—language that currently guides AI development across government and much of the private sector. Instead, the revised framework will emphasize “protecting free speech” and ensuring AI systems align with “constitutional principles.”
The plan directs NIST to develop new evaluation methodologies for identifying what it calls “Chinese Communist Party (CCP) alignment” in AI models. This includes creating assessment tools that can detect whether AI systems have been “trained or fine-tuned to promote ideologies that are antithetical to American values.” The implications of this directive extend far beyond Chinese-developed models—it suggests federal evaluation of all AI systems for ideological compliance, potentially including models developed by American companies with global training datasets.
Federal procurement guidelines under the new framework will require agencies to prioritize AI systems that demonstrate “objective truth” capabilities, though the plan provides no specific methodology for determining objectivity. This standard will apply to all “frontier LLMs” (large language models) purchased by federal agencies, potentially affecting billions of dollars in government AI contracts. The current NIST AI Risk Management Framework emphasizes fairness, accountability, and transparency—principles that the plan suggests may be incompatible with its “objective truth” mandate.
Open-Source AI Strategy: Why the U.S. Is Betting on Open-Weight Models
America’s AI Action Plan positions open-source and open-weight AI models not merely as innovation drivers, but as geostrategic assets in the competition against China. The plan explicitly states that widespread adoption of American-developed open models can “establish American AI as the global standard” while reducing dependence on Chinese alternatives—a strategy that mirrors historical approaches to technology standard-setting during the Cold War.
The National AI Research Resource (NAIRR) pilot represents the plan’s flagship initiative for democratizing AI compute access. Unlike previous proposals that emphasized academic research, the expanded NAIRR will prioritize startup access and commercial development, with specific provisions for creating “spot and forward markets” for compute resources. This market-based approach aims to reduce barriers for small and medium-sized businesses while building a domestic AI ecosystem that can compete globally.
The plan directs the National Telecommunications and Information Administration (NTIA) to convene stakeholders focused on small and medium business AI adoption, with particular emphasis on sectors where Chinese AI models currently dominate. This includes developing “adoption playbooks” for industries like manufacturing, logistics, and customer service—areas where companies like DeepSeek and other Chinese AI developers have gained significant market share.
Financial market development represents another critical component of the open-source strategy. The plan calls for establishing regulatory frameworks that support “compute futures” and other financial instruments tied to AI infrastructure capacity. This approach aims to create market mechanisms that can respond rapidly to geopolitical disruptions in the AI supply chain, particularly scenarios where Chinese-controlled compute resources might become unavailable to American companies.
The strategy also includes provisions for “open-source diplomatic initiatives”—programs designed to encourage allied nations to adopt American-developed open models over Chinese alternatives. This includes technical assistance, training programs, and potentially preferential trade treatment for countries that align with U.S. open-source AI standards. The approach recognizes that the global AI ecosystem is increasingly characterized by competing technological standards rather than simply competing companies.
The AI Workforce Plan: How the White House Aims to Protect American Workers
Despite repeated “worker-first” rhetoric throughout the document, America’s AI Action Plan offers remarkably few concrete protections against AI-driven job displacement. Instead, the workforce strategy focuses heavily on retraining, skills development, and tax incentives—a market-oriented approach that places primary responsibility for adaptation on individual workers rather than implementing systemic safeguards.
The centerpiece workforce initiative establishes an AI Workforce Research Hub under the Department of Labor, tasked with studying AI’s impact on employment patterns and developing “rapid response” programs for displaced workers. However, the plan provides no specific funding commitments or timelines for these programs, instead directing agencies to “identify existing resources” and “leverage current authorities” to support displaced workers.
Tax policy represents the administration’s primary tool for encouraging worker adaptation. The plan directs the Internal Revenue Service (IRS) to provide guidance on Section 132 provisions, which would allow employers to provide tax-free AI training and reskilling programs to employees. While potentially valuable, this approach depends entirely on employer initiative and provides no recourse for workers whose employers choose not to offer such programs.
The plan emphasizes AI infrastructure workforce development—electricians, HVAC technicians, data center specialists, and other trades essential to the massive physical infrastructure buildout the strategy envisions. New Registered Apprenticeship programs will focus on these occupations, with expanded Career and Technical Education (CTE) programs designed to prepare workers for AI infrastructure roles. The Department of Education is directed to update curriculum standards to include AI-relevant technical skills.
Monitoring and research provisions include expanded Bureau of Labor Statistics (BLS) and Census Bureau studies of AI’s labor market impacts, with quarterly reporting requirements for sectors with high AI adoption rates. The plan also directs the Department of Labor to establish “rapid retraining funding” guidelines under the Workforce Innovation and Opportunity Act (WIOA), though no additional funding is allocated for these programs. Studies from institutions like the Brookings Institution suggest that retraining alone may be insufficient to address the scale of potential AI displacement.
Building AI Infrastructure: Data Centers, Energy, and the ‘Build, Baby, Build’ Agenda
Pillar II of the plan acknowledges what many consider America’s greatest vulnerability in the AI competition: energy infrastructure capacity. The document states bluntly that “U.S. energy capacity has stagnated since the 1970s” while China has rapidly expanded its electrical grid capacity, creating a fundamental bottleneck for AI development that could determine global competitiveness.
The infrastructure strategy centers on dramatic regulatory streamlining for data centers and energy projects. National Environmental Policy Act (NEPA) categorical exclusions will apply to AI data centers, effectively exempting them from traditional environmental review processes. The plan expands the FAST-41 permitting process to include data centers and energy infrastructure, with mandatory 18-month approval timelines and federal preemption of state delays.
Federal lands represent a massive untapped resource in this strategy. The plan makes federal property available for data center construction and power generation, with particular emphasis on sites near existing electrical transmission infrastructure. Clean Water Act Section 404 will include new nationwide permits specifically for data center construction, while Clean Air Act requirements will be streamlined for AI infrastructure projects.
The energy strategy embraces what the plan calls “all-of-the-above” generation sources, including geothermal, nuclear fission, and nuclear fusion development. The plan explicitly rejects what it terms “radical climate dogma” as an obstacle to rapid infrastructure development, suggesting environmental considerations will be subordinated to national security imperatives in AI infrastructure decisions.
Security considerations pervade the infrastructure approach. The plan prohibits adversary technology in AI infrastructure, with specific restrictions on Chinese-manufactured components in data centers and electrical systems. New high-security data centers will be built specifically for military and intelligence community use, with priority access agreements ensuring government AI workloads receive computing resources during national emergencies. The Department of Energy’s PermitAI project will be expanded across federal agencies to automate infrastructure permitting processes. Department of Energy research indicates that AI data centers could consume up to 20% of U.S. electricity generation by 2030.
Restoring American Semiconductor Manufacturing Under the AI Action Plan
The plan positions semiconductor manufacturing as the foundation of AI competitiveness, building on the CHIPS and Science Act while adding new requirements focused specifically on AI applications. The CHIPS Program Office will be restructured to prioritize AI-optimized chip manufacturing, with streamlined approval processes and new performance metrics tied to AI computational capabilities rather than general semiconductor output.
Export controls represent a critical component of the semiconductor strategy. The plan calls for new controls on semiconductor manufacturing sub-systems—the specialized equipment used to produce advanced chips. This represents an expansion beyond current controls, which focus primarily on finished semiconductor products, to include the entire manufacturing ecosystem. The goal is to prevent China from developing indigenous advanced semiconductor capabilities while maintaining American dominance in AI chip production.
The plan emphasizes America’s “near-monopoly leverage” in critical parts of the semiconductor supply chain, particularly in specialized manufacturing equipment and design software. New export controls will target these chokepoint technologies, with particular focus on equipment capable of producing chips suitable for AI training and inference workloads. The approach acknowledges that control over semiconductor manufacturing equipment may be more effective than chip export controls alone.
Domestic production incentives include expedited environmental reviews for semiconductor manufacturing facilities and priority access to federal lands for facility construction. The plan also directs agencies to “plug loopholes” in existing semiconductor export controls, suggesting current restrictions have been insufficiently enforced or contain workarounds that allow technology transfer to China.
Integration with AI development represents a significant shift from traditional semiconductor policy. The plan requires CHIPS Act recipients to demonstrate how their production capabilities will support AI applications, with specific quotas for AI-optimized chip production. This approach recognizes that general semiconductor manufacturing capacity may be less strategically valuable than specialized AI chip production in the current technological competition. Industry analysis from the Semiconductor Industry Association indicates that AI chip demand could represent 50% of global semiconductor market value by 2030.
AI in National Defense: The Pentagon’s New AI Adoption Roadmap
The defense implications of America’s AI Action Plan extend far beyond traditional military applications to encompass what the authors describe as a fundamental transformation of how the Department of Defense operates. The plan mandates that DoD workflow automation become a priority across all service branches, with specific timelines for implementing AI systems in logistics, personnel management, and operational planning.
A new AI & Autonomous Systems Virtual Proving Ground will be established within DoD, designed to rapidly test and deploy AI capabilities across military scenarios. Unlike traditional defense procurement timelines that can span decades, this proving ground will operate on 6-month development cycles, with successful systems fast-tracked for immediate deployment. The approach acknowledges that AI development timelines are fundamentally incompatible with traditional defense acquisition processes.
Priority compute access represents a critical national security provision. The plan establishes agreements with cloud providers ensuring that government AI workloads receive computing resources during national emergencies, potentially displacing commercial users during crisis situations. These “priority access agreements” recognize that AI capabilities could be decisive in future conflicts, requiring guaranteed access to computational resources regardless of market conditions.
Senior Military Colleges will be designated as AI research and education hubs, creating a pipeline for military personnel trained in AI applications. This includes new degree programs, officer education requirements, and enlisted specialty ratings focused on AI systems operation and maintenance. The approach aims to create an AI-literate military workforce capable of operating in increasingly automated combat environments.
High-security data centers specifically designed for classified AI workloads represent a significant infrastructure investment. These facilities will handle intelligence processing, operational planning, and weapons systems integration—all requiring security clearances and specialized physical protection. The plan acknowledges that current commercial cloud infrastructure, while suitable for many government applications, cannot meet the security requirements for sensitive military AI operations. Joint DoD-Intelligence Community AI adoption assessments will evaluate foreign AI systems for potential security threats, including backdoors, data exfiltration, and adversarial manipulation capabilities. Department of Defense AI studies suggest AI could provide decisive advantages in future military operations.
AI and National Security: How the U.S. Plans to Evaluate Frontier Model Risks
The plan establishes comprehensive frameworks for evaluating AI systems that could pose national security risks, with particular emphasis on “frontier models” capable of advanced reasoning, planning, and potentially dangerous capabilities. The Center for AI Standards and Innovation (CAISI) will lead evaluations for Chemical, Biological, Radiological, Nuclear, and Explosive (CBRNE) risks, as well as cybersecurity threats posed by advanced AI systems.
Assessment of adversary AI systems represents a new intelligence priority. The plan directs intelligence agencies to evaluate foreign AI models for backdoors, malicious behavior, and potential use in influence operations. This includes developing technical capabilities to analyze AI training processes, identify data sources, and detect potential manipulation or control mechanisms embedded in foreign models.
Biosecurity requirements represent one of the plan’s most specific regulatory provisions. Nucleic acid sequence screening becomes mandatory for all federally funded research, with data sharing mechanisms established between nucleic acid synthesis providers. This provision acknowledges concerns that AI systems could be used to design dangerous biological agents, requiring screening systems to identify potentially harmful DNA and RNA sequences.
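To make the screening provision concrete, here is a deliberately simplified sketch of how sequence-of-concern screening can work in principle: flag any order that shares a long exact subsequence with a sequence on a hazard list, checking both strands. The hazard sequence below is a meaningless placeholder, and real synthesis providers use far more sophisticated homology search, curated databases, and human review rather than exact k-mer matching.

```python
# Toy illustration of nucleic acid sequence screening: flag an order if it
# shares a long exact subsequence (k-mer) with any sequence of concern.
# The hazard list here is a hypothetical placeholder, not a real agent.

K = 20  # window length; production systems screen windows of ~50+ base pairs

def kmers(seq: str, k: int = K) -> set[str]:
    """All length-k windows of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def revcomp(seq: str) -> str:
    """Reverse complement, since hazards can appear on either strand."""
    return seq.upper().translate(str.maketrans("ACGT", "TGCA"))[::-1]

def build_index(hazard_seqs: list[str]) -> set[str]:
    """Index every k-mer of every hazard sequence and its reverse complement."""
    index: set[str] = set()
    for s in hazard_seqs:
        index |= kmers(s) | kmers(revcomp(s))
    return index

def screen_order(order_seq: str, index: set[str]) -> bool:
    """Return True if the ordered sequence should be flagged for review."""
    return not kmers(order_seq).isdisjoint(index)

hazard = ["ATGCGTACGTTAGCCGATCGATCGGCTAAGGCTTACG"]  # placeholder sequence
index = build_index(hazard)
print(screen_order("AAATGCGTACGTTAGCCGATCGATCAAA", index))  # shares a 20-mer -> True
print(screen_order("GGGGGGGGGGGGGGGGGGGGGGGGGGGG", index))  # unrelated -> False
```

The design point the plan’s data-sharing mechanism addresses is visible even in this toy: screening is only as good as the shared hazard index, which is why providers are directed to pool screening data rather than maintain separate lists.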
Intelligence collection on foreign AI development becomes a formal priority, with specific focus on Chinese frontier AI projects. The plan directs agencies to recruit top AI researchers to federal service, recognizing that evaluating advanced AI systems requires expertise that may not currently exist within government. This includes new hiring authorities and compensation packages designed to compete with private sector AI salaries.
Risk evaluation frameworks will be updated quarterly to address rapidly evolving AI capabilities. The plan acknowledges that current risk assessment methodologies may become obsolete as AI capabilities advance, requiring flexible frameworks that can adapt to new threat vectors. International coordination with allied intelligence services will focus on sharing information about AI risks and coordinating responses to threatening foreign AI developments. Research institutions like the RAND Corporation have documented significant challenges in evaluating AI national security risks due to the pace of technological change.
International AI Diplomacy: Export Strategy, Alliances, and Countering China
Pillar III of the plan positions international AI diplomacy as essential to maintaining American technological dominance, with China explicitly identified as the primary strategic competitor across multiple domains. The strategy goes beyond traditional trade policy to encompass what the authors describe as “technology diplomacy”—using AI capabilities and partnerships as tools of geopolitical influence.
A full-stack AI export program through the Department of Commerce will work with industry consortia to promote American AI technologies abroad. This program encompasses not just AI models and applications, but the entire technological ecosystem: chips, software, infrastructure, and technical expertise. The approach recognizes that AI competitiveness requires controlling the entire technology stack rather than individual components.
Countering Chinese influence in international governance represents a diplomatic priority across multiple organizations. The plan calls for strategic engagement in the United Nations, OECD, G7, G20, International Telecommunication Union (ITU), and ICANN to ensure AI governance standards align with American interests rather than Chinese preferences. This includes funding alternative technical standards organizations and supporting allied countries’ participation in AI governance discussions.
Export control enforcement receives significant new authorities and technologies. Location verification features will be required on advanced AI compute chips, allowing real-time monitoring of where American AI hardware is being used. The Foreign Direct Product Rule and secondary tariffs become tools for enforcing AI technology restrictions, potentially cutting off countries or companies that violate export controls from the broader American technology ecosystem.
The strategy emphasizes plurilateral controls—agreements with allied nations to coordinate AI export restrictions—over multilateral treaty approaches that might include China. This “coalition of the willing” approach aims to create technological alliances that can move more quickly than traditional international organizations while maintaining sufficient market power to influence global AI development patterns.
Technology diplomacy strategic planning includes creating an “AI global alliance” of countries committed to what the plan calls “democratic AI governance principles.” This alliance would coordinate research funding, share technical expertise, and potentially provide collective security for AI infrastructure against cyber attacks or other disruptions. Analysis from the Council on Foreign Relations suggests that AI diplomacy could become as significant as traditional military alliances in determining geopolitical influence.
Combating Deepfakes and Synthetic Media: New Legal and Technical Frameworks
The plan addresses synthetic media and deepfakes through both technological and legal mechanisms, acknowledging that AI-generated content poses significant challenges to legal systems, democratic processes, and social trust. The recently enacted TAKE IT DOWN Act (Public Law No. 119-12) provides the legal foundation, but implementation requires new technical standards and enforcement mechanisms.
The NIST Guardians of Forensic Evidence program will be formalized as a federal guideline, establishing technical standards for detecting AI-generated content in legal proceedings. This includes developing authentication technologies, training law enforcement personnel, and creating chains of custody procedures for digital evidence in an era where any media could potentially be AI-generated.
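One building block behind such authentication and chain-of-custody work is cryptographic provenance: a capture device or publisher commits to a keyed hash of the media at creation time, so any later alteration is detectable. The minimal sketch below uses only Python’s standard library; the HMAC key is a hypothetical stand-in for the public-key signatures that real provenance standards such as C2PA use, and this verifies integrity since signing rather than detecting synthetic content directly.

```python
# Toy sketch of provenance-based media authentication: a publisher records a
# keyed hash of the file at creation time; any later modification (including
# AI editing) breaks verification. Real systems use public-key signatures and
# embedded manifests; the shared key here is purely illustrative.
import hashlib
import hmac

SIGNING_KEY = b"hypothetical-publisher-key"

def sign_media(data: bytes) -> str:
    """Produce a provenance tag for the original media bytes."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    """Check that the bytes are unmodified since signing."""
    return hmac.compare_digest(sign_media(data), tag)

original = b"\x89PNG...raw image bytes..."
tag = sign_media(original)

print(verify_media(original, tag))                # True: untouched since signing
print(verify_media(original + b"edit", tag))      # False: altered after signing
```

The limitation this exposes is exactly the chain-of-custody problem the plan raises: a valid tag proves the file is unchanged since signing, but says nothing about media that was never signed, which is why detection tooling and legal authentication standards are needed alongside provenance.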
Federal Rules of Evidence Rule 901(c) will be updated to address AI-generated content, with the Department of Justice developing specific guidance on deepfake authentication standards for federal agency adjudications. This represents a significant evolution in legal evidence standards, acknowledging that traditional authentication methods may be insufficient for AI-generated content.
The technical framework emphasizes real-time detection capabilities rather than post-hoc analysis. This includes developing AI systems that can identify synthetic content as it’s being created or distributed, potentially preventing harmful deepfakes from spreading before they can cause damage. The approach recognizes that traditional content moderation strategies may be too slow to address AI-generated misinformation campaigns.
Court system implications extend beyond federal proceedings to state and local jurisdictions. The plan includes funding for state court systems to upgrade technical capabilities and train personnel on AI-generated evidence issues. This comprehensive approach acknowledges that synthetic media challenges affect all levels of the legal system, not just federal proceedings. Legal scholars at institutions like Harvard Law School have noted that deepfake technology could fundamentally challenge how courts evaluate evidence and witness testimony.
What’s Missing from America’s AI Action Plan — Critical Gaps and Open Questions
Despite its comprehensive scope, America’s AI Action Plan contains significant gaps that experts suggest could undermine its effectiveness or create unintended consequences. Perhaps most notably, the 28-page document includes no specific budget figures, spending commitments, or quantitative performance benchmarks—a striking omission for a strategy that proposes massive infrastructure investments and regulatory overhauls.
Privacy protection receives minimal attention despite AI’s growing impact on personal data collection and processing. While the plan mentions privacy considerations in passing, it provides no specific protections for individuals or mechanisms for addressing AI systems that may violate existing privacy laws. This gap is particularly significant given the plan’s emphasis on removing regulatory barriers that might include existing privacy protections.
The tension between “worker-first” rhetoric and minimal displacement safeguards represents perhaps the plan’s most significant internal contradiction. While repeatedly emphasizing worker protection, the actual policy mechanisms focus almost exclusively on market-based solutions—retraining programs, tax incentives, and skills development—with no direct protections against AI-driven job displacement or income support for displaced workers.
Consumer protection implications of widespread deregulation remain largely unaddressed. The plan’s emphasis on removing regulatory barriers could affect consumer safety standards, product liability frameworks, and recourse mechanisms for individuals harmed by AI systems. The document provides no analysis of how deregulation might impact consumer rights or safety.
Environmental impact assessment is notably absent from the infrastructure discussion. Despite acknowledging massive data center and energy infrastructure buildout, the plan provides no analysis of environmental consequences or sustainability considerations. The explicit rejection of “climate dogma” suggests environmental impacts may be considered secondary to competitive considerations.
Algorithmic accountability mechanisms are entirely missing from the governance framework. The plan provides no provisions for auditing AI systems, requiring transparency in AI decision-making, or enabling appeals processes for individuals affected by automated decisions. This represents a significant departure from existing AI governance frameworks that emphasize accountability and transparency.
The federal-state regulatory tension remains unresolved throughout the document. While proposing to use federal funding to influence state AI policies, the plan provides no clear framework for resolving conflicts between federal directives and state sovereignty in technology governance. Constitutional scholars suggest this approach could face significant legal challenges, potentially undermining the plan’s implementation timeline and effectiveness.
Frequently Asked Questions
What is America’s AI Action Plan 2025?
America’s AI Action Plan is the White House’s comprehensive strategy document published in July 2025 that outlines how the United States will maintain AI leadership globally through three pillars: accelerating innovation, building infrastructure, and leading international diplomacy. It emphasizes deregulation, countering China, and establishing “American values” in AI development.
How does the AI Action Plan differ from Biden’s AI policies?
The plan explicitly reverses Biden’s Executive Order 14110, prioritizing deregulation over safety regulations. It removes focus on misinformation, DEI, and climate change from the NIST AI Risk Management Framework, instead emphasizing free speech and what it calls “objective truth” in AI systems while positioning China as the primary strategic threat.
What are the three pillars of America’s AI Action Plan?
Pillar I: Accelerate AI Innovation (15 subsections covering deregulation, open-source AI, workforce development, and government adoption). Pillar II: Build American AI Infrastructure (8 subsections on data centers, energy, semiconductors, and cybersecurity). Pillar III: Lead in International AI Diplomacy and Security (7 subsections on exports, countering China, and biosecurity).
How does the plan address China as an AI competitor?
The plan explicitly names China as the primary AI adversary and calls for evaluating Chinese AI models for CCP alignment, countering Chinese influence in international governance bodies (UN, OECD, G7, G20), strengthening export controls on semiconductors, and establishing American AI as the global standard through open-source models and international partnerships.
What specific policies does the plan propose for AI infrastructure?
The plan proposes NEPA categorical exclusions for data centers, expanding the FAST-41 permitting process, making federal lands available for AI infrastructure, streamlining Clean Water and Clean Air Act requirements, developing grid stabilization strategies, and creating high-security data centers for military and intelligence use with priority compute access during emergencies.