Generative AI for DoD Influence Operations: Key Findings from the RAND Report

📌 Key Takeaways

  • Duplicative Procurement: Multiple DoD organizations are acquiring overlapping AI tools for influence activities, wasting resources and creating fragmentation.
  • Beyond Content Creation: Generative AI’s greatest value for influence lies in analysis, planning, and assessment — not just producing propaganda materials.
  • Acquisition Speed Mismatch: Traditional defense procurement cycles are too slow for AI’s rapid development, requiring new flexible acquisition strategies.
  • Adversary Advantage: Russia’s DoppelGänger and China’s Operation Spamouflage demonstrate that adversaries are already deploying AI-enabled influence at scale.
  • No Enterprise Strategy: The DoD lacks a unified plan for generative AI in influence operations, with efforts remaining ad hoc and bottom-up across the joint force.

Why Generative AI Matters for Military Influence Operations

Generative AI for DoD influence operations has emerged as one of the most consequential capability gaps facing the United States military establishment. A landmark RAND Corporation report published in 2025 lays bare the urgent need for the Department of Defense to fundamentally rethink how it acquires, deploys, and sustains artificial intelligence technologies across the influence enterprise. The report, titled Acquiring Generative AI for U.S. Department of Defense Influence Activities, draws on 18 semi-structured interviews with subject-matter experts and a workshop with 24 participants from across DoD influence organizations.

The stakes could hardly be higher. Computing hardware capability has grown roughly tenfold since 2010, and adversaries like China and Russia have seized upon generative AI to conduct sophisticated influence campaigns at unprecedented scale. Meanwhile, the U.S. military’s approach to acquiring these same technologies remains fragmented, under-resourced, and constrained by procurement processes designed for an earlier era. For defense professionals, policymakers, and technology strategists seeking to understand how AI will reshape information warfare, this RAND analysis provides the most comprehensive assessment to date. To explore the full report interactively, visit our interactive library where complex defense documents become engaging learning experiences.

Generative AI Capabilities Beyond Content Creation

One of the report’s most important findings challenges a widespread misconception within the military establishment. When defense professionals think about generative AI for influence activities, they overwhelmingly focus on content creation — generating text, images, audio, and video for messaging campaigns. The RAND researchers found that this narrow view significantly underestimates the technology’s strategic value.

Generative AI capabilities for influence span three fundamental task categories as defined in the report. The first involves understanding how information impacts the operational environment, including characterizing the overall information environment, obtaining and analyzing publicly available information (PAI), identifying emerging themes, and detecting adversary action. The second category supports human and automated decisionmaking by synchronizing planning, drafting operational products, and analyzing courses of action. The third category addresses execution and assessment — the content creation piece that receives the most attention, but also development of measures of effectiveness (MOEs) and measures of performance (MOPs).

The analysis and planning applications deserve particular emphasis. Generative AI can process vast quantities of multilingual open-source data to map social networks, track sentiment shifts in real time, visualize physical movement flows, and identify emerging narratives before they gain traction. These capabilities transform how influence planners understand target audiences and develop tailored approaches. The Office of the Secretary of Defense has recognized this broader applicability, but implementation remains inconsistent across the force.

Voice cloning, real-time translation, and signature management represent additional capability frontiers. The report notes that generative AI can translate text and voice in near-real time, enabling influence operators to communicate effectively across linguistic boundaries — a critical advantage in coalition operations and multi-cultural theaters of engagement.
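One of the analytic tasks described above, tracking sentiment shifts in real time, can be illustrated with a minimal sketch. This is not from the report: the function name, window size, and threshold are illustrative assumptions, and the sentiment scores themselves would come from an upstream multilingual model that is not implemented here.

```python
from collections import deque
from statistics import mean

def detect_sentiment_shift(scores, window=5, threshold=0.15):
    """Flag points where rolling-average sentiment moves sharply.

    scores: chronological sentiment scores in [-1, 1] (assumed output of
    an upstream multilingual sentiment model -- not implemented here).
    Returns indices where the rolling mean jumps by more than `threshold`.
    """
    shifts = []
    recent = deque(maxlen=window)   # sliding window of recent scores
    prev_avg = None
    for i, score in enumerate(scores):
        recent.append(score)
        avg = mean(recent)
        # Flag a shift when the rolling average moves abruptly
        if prev_avg is not None and abs(avg - prev_avg) > threshold:
            shifts.append(i)
        prev_avg = avg
    return shifts

# A narrative that turns sharply negative midway through is flagged
# starting at the point where negative scores enter the window:
print(detect_sentiment_shift([0.2] * 6 + [-0.8] * 6))
```

In a real pipeline the thresholding would be statistical rather than fixed, but the structure — stream scores in, aggregate, flag discontinuities — is the core of the "identify emerging themes" task the report describes.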

Adversary AI Influence Campaigns Driving Urgency

The urgency behind acquiring generative AI for influence operations becomes clear when examining adversary capabilities. The RAND report highlights two prominent examples that illustrate how near-peer competitors have already integrated AI into their information warfare arsenals. Russia’s DoppelGänger campaign demonstrates sophisticated use of AI-generated content to manipulate Western media ecosystems, creating fake news websites and social media personas at industrial scale. China’s Operation Spamouflage represents an equally alarming deployment of automated influence infrastructure targeting audiences across multiple continents and languages.

These are not theoretical threats. Both campaigns have been documented by government agencies, academic researchers, and private-sector threat intelligence firms, and both remain active. The second-mover disadvantage is real — while adversaries refine their AI-enabled influence capabilities through continuous operational use, the DoD risks falling further behind with each procurement cycle that passes without meaningful AI integration.

The report’s interviewees expressed particular concern that the pace of adversary adoption creates compounding disadvantages. Each generation of AI models brings marked improvements in output quality, linguistic naturalness, and cultural adaptation. A DoD that takes two to three years to procure and field an AI tool may find that the technology landscape has shifted so dramatically that the capability is obsolete upon deployment.

Transform complex defense reports into interactive experiences your team will actually read.

Try It Free →

DoD Acquisition Challenges for AI Procurement

The RAND report identifies a constellation of procurement failures that collectively undermine the DoD’s ability to acquire generative AI capabilities effectively. Perhaps the most damaging finding is the pervasive duplication of effort across the influence enterprise. Multiple organizations — spanning the military services, U.S. Special Operations Command (USSOCOM), and U.S. Cyber Command (USCYBERCOM) — are independently acquiring overlapping tools for identical functions such as sentiment analysis, text translation, and target audience analysis.

This fragmentation wastes scarce resources and prevents the kind of interoperability that modern influence operations demand. When different units use different AI tools that cannot share data or integrate workflows, the resulting friction slows operational tempo and creates dangerous intelligence gaps. The report notes that increased coordination and collaboration among influence stakeholders could dramatically improve technology acquisition and sustainment outcomes.

Equally problematic is the limited AI literacy among acquisition professionals. Many contracting officers and program managers lack familiarity with current or emerging AI capabilities, making it difficult to write effective requirements documents, evaluate vendor proposals, or assess whether a commercial off-the-shelf solution meets operational needs. The rapid development cycle of AI technology — faster even than traditional agile software development — exacerbates this knowledge gap.

The report also highlights a troubling cultural dynamic: DoD personnel may be reluctant to share potential AI use cases with leadership out of fear that innovative applications might be shut down. This chilling effect on bottom-up innovation means that some of the most promising applications of generative AI to influence activities never reach decision-makers who could resource them. Understanding these procurement dynamics is essential for anyone working in defense technology and AI policy.

Strategic Acquisition Framework for AI Influence Tools

To address these systemic challenges, the RAND researchers propose a strategic acquisition framework that matches procurement approaches to capability scope. The framework operates along two axes: the level of capability specificity (from broad to highly specialized) and the scale of adoption (from individual unit to DoD-wide deployment).

For broadly applicable AI tools — commercial platforms like large language models that serve general-purpose functions across the enterprise — the report recommends DoD-wide procurement coordinated through the Chief Digital and Artificial Intelligence Office (CDAO). This approach leverages economies of scale and ensures consistent access across organizations. The CDAO’s Task Force Lima, established in August 2023, provides an existing institutional mechanism for this enterprise-level coordination.

For moderately specialized capabilities that serve multiple influence organizations, the framework recommends innovative contracting mechanisms including Other Transaction Authority (OTA), Small Business Innovation Research (SBIR), and Small Business Technology Transfer (STTR) programs. These vehicles offer flexibility by operating outside many Federal Acquisition Regulation (FAR) requirements that slow traditional procurement.

For highly bespoke tools — capabilities tailored to specific unit requirements and operational contexts — the report endorses in-house development and direct partnerships with AI developers. The Army Special Operations Forces’ Ghost Machine tool exemplifies this approach, representing a capability developed internally to meet unique operational requirements that no commercial product could satisfy. The software acquisition pathway established in 2020 through DoDI 5000.87 provides the policy framework for these rapid, iterative development efforts.
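The framework’s two axes — capability specificity and adoption scale — can be summarized as a simple lookup. This sketch paraphrases the tiers described above; the category labels and return strings are this article’s shorthand, not the report’s formal terms.

```python
def acquisition_pathway(specificity: str, scale: str) -> str:
    """Map (capability specificity, adoption scale) to a procurement
    approach, paraphrasing the RAND framework's three tiers.

    specificity: "broad", "moderate", or "bespoke"
    scale: "dod_wide", "multi_org", or "unit"
    """
    if specificity == "broad" and scale == "dod_wide":
        # Commercial platforms serving general-purpose functions
        return "Enterprise procurement coordinated through CDAO"
    if specificity == "moderate":
        # Tools serving multiple influence organizations
        return "Flexible contracting (OTA / SBIR / STTR)"
    if specificity == "bespoke":
        # Unit-specific tools, e.g. internally developed capabilities
        return "In-house development or direct developer partnership"
    # Cases the framework leaves open default to coordination first
    return "Coordinate with influence stakeholders before acquiring"

print(acquisition_pathway("broad", "dod_wide"))
print(acquisition_pathway("bespoke", "unit"))
```

Encoding the framework this way makes the report’s central point concrete: the procurement approach is a function of where a capability sits on the two axes, not a single default pathway.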

Training and Workforce Readiness for AI Influence Operations

The workforce dimension of generative AI acquisition may be the most underappreciated challenge identified in the RAND report. The researchers found that available training and resources for developing and acquiring AI capabilities are severely lacking across the influence enterprise. Many users are self-teaching on personal computers, an approach that creates security vulnerabilities, inconsistent skill levels, and missed opportunities to develop institutional expertise.

The report identifies knowledge and skill gaps at every level of the organization — from tactical operators who need hands-on proficiency with AI tools, to mid-grade officers who must integrate AI capabilities into operational planning, to senior leaders who need sufficient understanding to make informed resourcing decisions. Decision-makers unfamiliar with AI’s capabilities and limitations in the influence domain cannot effectively prioritize investments or establish appropriate governance frameworks.

RAND recommends a multi-tiered training strategy. In the near term, the DoD should catalogue existing AI training opportunities, provide incentives such as tuition reimbursement for privately offered courses, and create awareness programs that help influence professionals understand what generative AI can and cannot do. The GRAY KNIGHT training exercise at the U.S. Army John F. Kennedy Special Warfare Center and School represents a pioneering effort to integrate generative AI into influence training scenarios, and the report suggests scaling similar approaches across the force.

Longer-term investments should include developing tailored curricula for acquisition professionals, embedding technical AI expertise within procurement teams, and establishing career incentives that attract and retain AI-literate talent within the influence community. Without these workforce investments, even the most sophisticated AI procurement strategy will fail to deliver operational results.

Make defense research accessible — turn dense PDFs into interactive learning experiences.

Get Started →

Enterprise vs. Bespoke AI Development Approaches

One of the most nuanced aspects of the RAND analysis concerns the tension between enterprise-level AI solutions and bespoke, unit-specific development. The report argues persuasively that no single acquisition approach can serve the full range of influence AI requirements — a finding that challenges bureaucratic preferences for standardized, one-size-fits-all procurement.

Enterprise solutions offer clear advantages in cost efficiency, interoperability, and maintainability. When multiple organizations need the same basic capability — such as large-language-model access for text analysis or translation — procuring a single enterprise license makes far more sense than allowing each organization to negotiate separate contracts. The CDAO is the natural home for these enterprise-wide acquisitions, and the report recommends that influence stakeholders actively coordinate with that office to leverage common infrastructure including model repositories and data platforms.

However, the influence mission generates requirements that generic enterprise tools cannot satisfy. Specialized capabilities for narrative intelligence, cultural analysis, target audience segmentation, and adversary campaign attribution require purpose-built models trained on domain-specific data. These bespoke solutions are best developed through partnerships with specialized AI firms or through in-house efforts by technically proficient units. The key insight is that a middle layer of reusable infrastructure — shared computing resources, common data repositories, and standardized APIs — can dramatically reduce the cost and complexity of building specialized tools on top of enterprise foundations.

The report warns against two common failure modes. The first is excessive centralization, where enterprise-level bureaucracy prevents rapid adoption of specialized tools that operators urgently need. The second is excessive fragmentation, where unconstrained bottom-up development creates an unmanageable proliferation of incompatible tools with duplicative costs and no interoperability pathway.

Risk Management in AI-Enabled Influence Activities

The RAND report provides a comprehensive taxonomy of risks associated with deploying generative AI for influence operations, organized into three categories: adoption risks, technical risks, and security concerns. Each category demands specific mitigation strategies that must be built into acquisition planning from the outset.

Adoption risks include the second-mover disadvantage already discussed, along with technological literacy gaps, risk tolerance issues, and the lag between rapidly evolving AI capabilities and the policy frameworks meant to govern their use. The report notes that ethical, legal, and operational considerations create legitimate friction but must be balanced against the operational cost of inaction. Authority and policy lag — where existing governance structures cannot accommodate novel AI applications — represents a particularly persistent barrier.

Technical risks unique to generative AI include hallucinations and inappropriate model output, which pose obvious dangers in influence operations where factual accuracy and cultural sensitivity are paramount. The requirement for expensive GPU hardware creates access barriers, particularly for forward-deployed units operating in austere environments. Limited integration into established military workflows means that even capable AI tools may sit unused because they require parallel processes rather than fitting seamlessly into existing operational rhythms.

Security concerns encompass both traditional cybersecurity vulnerabilities and novel attack vectors specific to AI systems. Training data repositories represent high-value targets for adversary intelligence collection, and model manipulation through adversarial inputs could compromise the integrity of AI-generated analysis or content. The report emphasizes that security measures must enable rather than prevent AI adoption — overly restrictive security postures will simply drive users to unauthorized workarounds that create even greater risk. For a deeper dive into how organizations manage AI risk, explore our interactive defense and technology library.

RAND Recommendations for DoD Generative AI Strategy

The report’s recommendations are organized around three institutional actors, creating a layered governance structure for generative AI acquisition in the influence domain. At the top, the Principal Information Operations Advisor (PIOA) — a position established in 2020 — should direct the Office of Information Operations Policy (OIOP) to foster collaboration and prioritize AI acquisition across the enterprise.

Specific PIOA-level recommendations include defining formal requirements for influence activities across services, USSOCOM, and USCYBERCOM; encouraging investment in generative AI capabilities as an equipping function under Title 10; fostering regular collaboration among influence stakeholders; and coordinating with CDAO to leverage common infrastructure. The report’s expert workshop demonstrated that bringing diverse influence organizations together produces insights and synergies that siloed approaches cannot achieve.

At the service level, the report recommends identifying appropriate organizations to manage AI acquisition, implementing formal processes to define capability requirements (focusing on requirements rather than specific tools), increasing the tempo of capability purchases and reassessment cycles, and developing coordinated sustainment strategies. The emphasis on faster reassessment cycles is particularly important — the report argues that limited incentives exist for AI developers to improve or mature tools after initial acquisition, and more-frequent competitive reviews would drive continuous vendor innovation.

Finally, PIOA and OIOP should develop adoption guidance that provides opportunities and resources rather than imposing restrictive security measures. This includes investing in AI training at all organizational levels and developing flexible guidelines — guardrails rather than barriers — that govern AI-generated output in influence activities while accommodating rapidly changing conditions and the need for interoperability with allied and partner nations.

Implications for Allied Interoperability and Future AI Partnerships

The RAND report’s final major theme concerns the coalition dimension of AI-enabled influence operations. As generative AI capabilities proliferate across the DoD, the need to interoperate with partners and allies increases proportionally. Influence operations are rarely conducted unilaterally — they typically involve coordination with allied militaries, intelligence agencies, and diplomatic establishments, each operating under different legal frameworks and classification systems.

This interoperability requirement adds significant complexity to acquisition planning. AI tools acquired for influence must be designed with coalition use in mind, incorporating features like multi-classification-level operation, standardized data sharing protocols, and configurable content approval workflows that accommodate different national legal requirements. The report suggests that early engagement with key allies during the requirements definition phase can prevent costly retrofit requirements later.

The broader strategic implications extend beyond military operations. Generative AI is reshaping the global information environment in ways that affect diplomacy, economic competition, and social stability. The DoD’s approach to acquiring and deploying these technologies will influence how democratic nations collectively respond to authoritarian information campaigns. Getting this right — building capable, interoperable, and responsibly governed AI influence tools — is not merely a defense procurement challenge but a strategic imperative for the rules-based international order.

The complete RAND report contains detailed acquisition matrices, capability taxonomies, and implementation roadmaps that defense professionals will find invaluable. The analysis represents the kind of complex, data-rich policy research that benefits enormously from interactive presentation formats — where readers can explore findings at their own pace, focus on the sections most relevant to their roles, and engage with the material rather than passively consuming it.

Turn RAND reports and defense PDFs into experiences your stakeholders will actually engage with.

Start Now →

Frequently Asked Questions

What is the RAND report on generative AI for DoD influence operations about?

The RAND report (RR-A3157-1) examines how the U.S. Department of Defense can acquire and deploy generative AI technologies for influence activities, including psychological operations, military information support operations, and information warfare. It identifies key procurement challenges, capability gaps, and provides strategic recommendations for acquisition reform.

Why does the DoD need generative AI for influence activities?

Adversaries like China and Russia are already using AI-enabled influence campaigns such as Operation Spamouflage and DoppelGänger. The DoD needs generative AI to match and counter these threats, improve operational environment analysis, accelerate planning cycles, create tailored messaging at scale, and conduct real-time sentiment analysis across multiple languages.

What are the main challenges in acquiring AI for military influence operations?

Key challenges include duplicative procurement across organizations, limited AI literacy among acquisition professionals, policy and authority gaps, an acquisition cycle too slow for rapidly evolving AI technology, lack of enterprise-wide strategy, insufficient computing infrastructure, and security concerns around training data and model vulnerabilities.

What acquisition approaches does RAND recommend for generative AI?

RAND recommends a portfolio approach matching acquisition strategy to capability scope: enterprise-level procurement through CDAO for broadly applicable tools like ChatGPT, innovative contracting mechanisms (OTA, SBIR, STTR) for specialized capabilities, partnerships with AI developers, and in-house development for highly bespoke unit-level tools like Ghost Machine.

How can generative AI improve military influence planning and assessment?

Generative AI can characterize information environments by processing publicly available information at scale, map social networks, identify emerging narratives, support course-of-action development, translate text and voice in near-real time, develop measures of performance and effectiveness, and help assess influence campaign outcomes — going far beyond simple content generation.

What role does the CDAO play in DoD AI acquisition for influence?

The Chief Digital and Artificial Intelligence Office (CDAO) manages enterprise-level AI initiatives including Task Force Lima, established in August 2023. RAND recommends that influence stakeholders coordinate with CDAO to leverage shared infrastructure, model repositories, and common platforms while maintaining the flexibility to acquire specialized influence-specific tools independently.

Your documents deserve to be read.

PDFs get ignored. Presentations get skipped. Reports gather dust.

Libertify transforms them into interactive experiences people actually engage with.

No credit card required · 30-second setup