Acquiring Generative Artificial Intelligence for U.S. Department of Defense Influence Activities
Research Summary | Published Jul 22, 2025
Photo by Matt Britton/U.S. Army.
Bottom line up front: Generative AI can improve analysis, operational planning, and assessment of influence activities, but it is a tool, not the answer, and maximizing its potential will take dedicated effort in several areas.
The integration of generative artificial intelligence (AI) into influence activities, as with all uses of AI, presents enormous opportunities for scaling and automation of tasks. As strategic competition intensifies, particularly with China and Russia, generative AI presents a crucial tool for helping the U.S. military process vast amounts of data and produce high-quality content more efficiently. However, ad hoc efforts of the U.S. Department of Defense (DoD) to acquire, develop, and operationalize generative AI capabilities have failed to address fundamental questions about identifying needed capabilities; acquiring them efficiently; and ensuring knowledge and training among both decisionmakers and end users, particularly for conducting influence activities. To gain insights into current and potential practices for acquiring and employing AI for influence-related activities, RAND researchers interviewed experts and conducted a workshop to elicit their tactical and operational needs.
What is meant by influence or influence activity? In brief, an influence activity is a deliberate attempt to affect a person’s or group’s thoughts, feelings, or behavior. According to Joint Publication 3-04, Information in Joint Operations, published in 2022, “The joint force leverages information to affect the perceptions, attitudes, decision making, and behavior of relevant actors” (p. ix). Joint Publication 3-04 also states, “Relevant actors include individuals, groups, populations, or automated systems whose capabilities or behaviors have the potential to affect the success of a particular campaign, operation, or tactical action” (p. ix). Both Russia and the People’s Republic of China are known to have waged recent, prominent influence campaigns against the European Union and the U.S. media using generative AI (with the intent of disseminating disinformation and disrupting elections; see the red box on p. 7, which describes Russia’s DoppelGänger campaign and China’s Operation Spamouflage). The ability not only to detect and defend against such campaigns but also to mount such campaigns where appropriate has made generative AI a critical capability for the military and the Intelligence Community.
Title 10, Section 397, of the U.S. Code establishes a Principal Information Operations Advisor (PIOA) to advise the Secretary of Defense on all aspects of information operations within DoD. PIOA oversees policy, strategy, planning, resource management, operations, personnel, and technology for information operations and ensures coordination with the Department of State, the Intelligence Community, and other federal agencies. PIOA also manages risk to prevent U.S. persons from being exposed to information meant for foreign audiences, sets standards for acknowledging operations, and fosters collaboration with the private sector and academia on countering malign influence activities.
In October 2020, the Secretary of Defense designated the Under Secretary of Defense for Policy (USD[P]) as PIOA. To support the assigned responsibilities, the USD(P) formed the Office of Information Operations Policy (OIOP) and established a PIOA cross-functional team with representatives from each of the services. The PIOA cross-functional team studied the 2022 National Defense Strategy of the United States of America and Joint Publication 3-04 (Information in Joint Operations) and collected inputs from multiple information forces’ strategies and foundational documents. The team did this to align the 2023 DoD Strategy for Operations in the Information Environment with the 2022 National Defense Strategy and focus on building DoD capabilities and capacities to execute operations in the information environment in support of integrated deterrence, campaigning, and building enduring advantages—approaches believed to be needed to advance U.S. national defense priorities and defend and promote national interests.
However, despite these efforts, coordination across the information and influence communities remains challenging—from the lexicon, through bureaucratic roles and responsibilities, to operational execution. This environment hinders the clear guidance, prioritization, and resources required for effective, efficient, and dynamic acquisition and development of AI capabilities and tools to conduct influence activities.
Generative AI offers information personnel the potential to analyze large volumes of data and to generate high-quality content far more efficiently than with the tools they currently possess. AI-enabled technology does not necessarily introduce novel capabilities but rather force-multiplies a host of existing capabilities. That is, influence professionals already have the capability to craft messages and narratives, conduct audience analyses, counter adversary narratives, and mount their campaigns on social media. But, as noted in a 2023 report by the U.S. Government Accountability Office called Contested Information Environment: Actions Needed to Strengthen Education and Training for DOD Leaders, DoD lacks the resources needed to train service members and decisionmakers to operate in the information environment, particularly in an environment of contested information, and the resources needed to manage information. Acquiring AI capabilities for influence is about improving on these capabilities—for example, increasing operational tempo, improving quality (and, thus, influence), and scaling up influence campaigns. But although the potential benefits for adopting generative AI for influence are tremendous, they do not come without risks.
In Joint Publication 3-04, the Joint Chiefs of Staff identified three sets of tasks for which they envisioned generative AI playing a role:
AI can be a vital resource in managing and analyzing the large amounts of data needed to conduct influence operations. Likewise, generative AI can be enormously helpful in creating and disseminating the output of all of that analysis. But these actions require tremendous computing capacity and extensive training of operators and decisionmakers, and both computing and training resources are in short supply (for example, compute requirements have been estimated to have grown four- to fivefold yearly for the past decade).
When DoD considers acquiring generative AI for influence applications, it needs to consider several risks and associated challenges:
Not having adequate technological skill or literacy also presents a challenge for identifying the needed capabilities and requirements; acquiring hardware with the needed capacity and the most up-to-date software; establishing processes for verification, validation, testing, and evaluation; and keeping up with rapid technological advances.
DoD has found it challenging to define software requirements and acquire software through the traditional requirements and acquisition processes designed for hardware-intensive weapon systems (e.g., aircraft or ships). As a result, DoD has established distinct acquisition pathways and alternative requirements processes for software. But some categories of technologies—such as cyber (computers and other information-related entities) and generative AI—do not conform neatly to the existing acquisition pathways.
Experts identified an array of challenges related to determining the capabilities required for generative AI use in influence operations, acquiring those capabilities, and putting them in the hands of users.
They also offered some suggestions for meeting those challenges (for examples, see Table 1).
To address the challenges presented by generative AI acquisitions, members of the influence community described employing a variety of the available pathways, but each presents its own challenges (see Table 2).
This piecemeal approach to acquisition of generative AI tools and services means that the influence community never develops the road map or architecture that developers, acquirers, and operators need to adhere to for a coordinated effort and to conduct operations at scale. And beyond initial acquisition, sustainment efforts would greatly benefit from a coordinated strategy across organizations performing influence. In 2023, with this in mind, DoD released its Data, Analytics, and Artificial Intelligence Adoption Strategy, which provides a framework for determining the appropriateness of acquiring shared AI. This framework (a simplified version of what is shown in Figure 1) considers the trade-offs between the complexity of implementing a particular AI tool and the similarity of intended outcomes across organizations to steer the selection of a shared or centralized AI tool versus a bespoke AI tool.
Within the framework, an AI tool can be identified along a level of capability (horizontal axis), from specific to the influence community to generalizable across the broader defense community. An AI tool can also be identified along a scale of capability adoption (vertical axis) as adopted by an individual unit or more broadly adopted by all of DoD. Where a tool falls along these spectra can help determine how that tool is most appropriately acquired. Broadly applicable tools that large portions of DoD adopt, such as ChatGPT, should be procured and maintained broadly by DoD or the Chief Digital and Artificial Intelligence Office (CDAO). In contrast, tools that are specific to the influence mission, such as the Army’s Ghost Machine, should be acquired by an individual unit or organization. (Ghost Machine is a tool developed by the U.S. Army that enables operators to use inexpensive, accessible technology for deception by mimicking, targeting, and influencing enemy soldiers.)
Figure 1 is a conceptual diagram showing three contiguous, diagonally stacked bands, each representing a different approach to adopting capabilities within DoD, depending on the breadth of the capability and the scale of its adoption. The diagram suggests that as the scope of adoption and capability become broader, organizations move from in-house development, through partnerships and collaborations, to innovative contracting mechanisms and traditional acquisition methods.
In any discussion of influence activities, much of the attention tends to focus on policy issues, such as ethical boundaries, rather than on administrative and technical challenges and barriers. The concerns raised by the experts and members of the influence community alike suggested a series of recommended steps that, if implemented, would help overcome these challenges and guide this work in a positive direction (see Table 3).
This research underscores the critical role of generative AI in enhancing DoD’s influence activities in both competition and conflict. The complexities and scale of strategic competition necessitate advanced AI capabilities to process and analyze large volumes of information, create tailored content, and ultimately maintain an operational edge. Generative AI can improve analysis, operational planning, and assessment of influence activities. But it is a tool, not the answer, and maximizing its potential will take dedicated effort in several areas. Relying on legacy acquisition systems to build this capability incurs risk that DoD cannot afford. As DoD expands and accelerates its acquisition approaches for software, it must also consider the unique requirements of AI acquisition.
This publication is part of the RAND research brief series. Research briefs present policy-oriented summaries of individual published, peer-reviewed documents or of a body of published work.