When Should We Worry About AI Being Used to Design a Pathogen?
Biology and AI Experts Weigh In
Research Summary | Published Oct 1, 2025
In the past ten years, the health care industry has seen applications of artificial intelligence (AI)–based tools proliferate rapidly in diagnostic imaging, the self-management of such conditions as diabetes, and other areas. Less well known are the ways that biology researchers are now employing AI tools to quickly elucidate how the three-dimensional structures of cellular organelles and complex macromolecules (such as enzymes) dictate and enable their functions—efforts that previously would have taken years. AI is also enabling biology researchers to assess the impacts of modifying those structures, with potentially lifesaving—or lethal—implications for the health of individuals and the public. For example, AI could elucidate how a virus transmitted only by physical contact with an infected individual might be genetically altered to become transmissible through the air, knowledge that could help prevent—or enable—the proliferation of dangerous pathogens capable of starting the next pandemic. A major concern is whether AI could plausibly be used, intentionally or unintentionally, to design such genetic alterations.
In response to the potential dangers that these advances could enable, researchers in several countries—and their respective governments—have begun, individually and collaboratively, to establish guidelines and limits for the safe and secure use of AI in biotechnology. But before scientists and governments can consider what guidelines are needed, they must answer big questions about what constitutes safe, secure, and reliable AI; what AI could plausibly enable scientists to do; and what constitutes plausible risks, not just today but in the near term and further into the future.
To help refine the scope of biosecurity planning for AI-assisted bioengineering (in which AI input makes the research process easier or faster) and AI-driven bioengineering (in which AI provides blueprints or instructions), a group of researchers in RAND Global and Emerging Risks conducted two expert Delphi panels. The purpose of the panels was to proactively clarify which scenarios involving the potential use of AI to alter pathogens fall within the envelope of plausible near-future capabilities and which are unlikely or impossible. In other words, the participants set out to assess the near-term limits of AI-assisted biological design, focusing on the feasibility of engineering novel pathogens. By delineating what is feasible and what is unlikely or impossible in AI-assisted or AI-driven biological design in the near term, this work aims to improve the signal-to-noise ratio in discussions of the uses of AI in biotechnology, often called AI-enabled or AI-assisted biology (AIxBio).
For at least the past 50 years, policymakers have tried to control applications of emerging biotechnology out of concern about their potential risks. In February 1975, biochemists and geneticists convened the International Congress on Recombinant DNA Molecules at the Asilomar Conference Grounds in Pacific Grove, California, to establish risk-based principles for conducting research that involved inserting foreign DNA into host cells to enable the genes encoded by that DNA to be copied and expressed. The safety principles that emerged from the meeting ended a moratorium on such research, enabling previously unimaginable progress toward understanding the genetic basis of development and disease and finding cures for some diseases.
Over the years, periodic clashes among the various stakeholders in the biotechnology, computational biology, public health, medical ethics, and biosecurity policymaking communities have revealed their conflicting cultures, values, and perceptions of the risk of dual-use biotechnology.
Within the past decade, scientific groups have convened both in the United States and internationally to attempt to recreate Asilomar with the goal of exploring and reaching a consensus about the most-concerning uses of AI in biotechnology and drafting policy, albeit with broader international participation and a wider set of concerns. [1] A major sticking point that has arisen from these convenings is the lack of clarity about the natural constraints on biological systems—constraints that might serve to limit what AI might accomplish in modifying naturally existing pathogens or creating new ones. Thus, the RAND study team chose to convene two separate Delphi panels: one comprising experts in AI and computational biology and a second comprising experts in biology and bioengineering. The Delphi method was developed at RAND in the 1950s. It brings together groups of subject-matter experts and other stakeholders to elucidate areas of agreement and persistent disagreement and to identify the underlying rationales among groups of experts. Importantly, the aim of the Delphi panel is not to force consensus.
In this study, the two expert panels each considered a set of five questions.
For each question, the panel members were asked to consider a list of the potential existing and near-term limits or constraints on AI’s ability to push the biotechnology envelope. Each potential limit was assessed for the likelihood that it would prevent AI from achieving a desired end.
The panels were explicitly not asked to consider the extent to which large language models (LLMs), a form of AI, could enable untrained actors to create harmful pathogens for the purpose of committing bioterrorism. This is because recent evidence strongly suggests that they could,[2] even though only a year earlier, a RAND red team exercise concluded that LLMs were not yet mature enough to do so.[3] This rapid reversal in thinking reflects the speed of progress in AI capabilities.
Focusing on the kinds of potentially risky physiological manipulations that existing or near-term AI models could help elucidate, the AI expert panel agreed on five points.
Focusing on the near-term biological limits to pathogen engineering, the biotechnology panel agreed on five challenges.
Combing through the transcripts of the panels’ deliberations, the researchers identified several areas of agreement and of remaining uncertainty between the panels.
AI is already an assistive tool and will remain so in the near term (for example, by carrying out automated work in cloud labs), but it cannot act on its own to alter or design a new pathogen. Panel members agreed that, without the creation of generally capable AI systems, such an ability would not emerge within the next two years (i.e., by 2027).
AI already assists with various bioengineering and adjacent tasks for both experts and malicious actors, especially the recognition of patterns in large datasets and hypothesis formulation and validation. Experts noted that AI might be able to automate some laboratory work in the near term, but such capabilities would be constrained by the need for human guidance, interpretation, and validation.
The potential risks of AI involvement in the design of pathogen-based bioweapons are increasing, although experts differed in their estimates of how quickly. Some experts expected slow progress and only marginally increased risk, while others were concerned that, at some threshold capability level, models could rapidly gain the ability to autonomously design novel bioweapons. Specific enablers of the shifting risk include automation, simulation, progress in clinical applications, and progress toward generally capable AI.
The limitations on the ability of AI to autonomously alter or create a novel bioweapon are complex, interdependent, and context dependent. A key takeaway from both the biological and the AI elicitations is that many of the identified limits are interdependent, making it challenging to consider any single constraint in isolation. Experts from both panels consistently highlighted how various biological and technological constraints overlap, interact, and mutually reinforce one another, complicating efforts to engineer novel pathogens or enhance existing ones.
AI’s effectiveness in biological design (and everything else) depends on the quality and quantity of the biological data used to develop and train the model. Experts from both panels emphasized that data biases, gaps, and inconsistencies remain significant barriers to AI-driven biological design, particularly when it comes to generating novel pathogens or predicting complex biological functions accurately.
No strong fundamental limit to AI capabilities was identified. Despite existing limitations and challenges, panelists agreed that AI should eventually be able to design novel, viable pathogens.
Despite growing concerns about the potential for AI to go rogue and autonomously design radically novel and dangerous pathogens, experts suggested that the worst fears remain implausible, at least through 2027. But progress in the ability of AI to act autonomously could radically change the risk outlook, and signs of such progress need to be monitored. Rather than acting autonomously, AI tools are likely to continue to serve as accelerators for already capable actors. Although some experts anticipated rapid gains in AI capabilities, no consensus emerged on what direction those gains would take or how quickly they would come.
Nevertheless, the panel findings support the need for governance and suggest two immediate targets: monitoring scientific progress as it affects the future risk landscape and mitigating the most-plausible and most-actionable risk pathways, which involve using AI tools to assist in optimizing or modifying existing pathogens.
Monitoring would most fruitfully track progress in four areas.
In the near term, experts must clarify where hard limits exist—for example, in pathogen modification, particularly at the intersection of biological and AI constraints. Doing so can significantly sharpen threat models and help policymakers focus on credible misuse cases. In this context, biological data infrastructure emerges as a key leverage point: The quality, accessibility, and oversight of genomic and experimental data will shape both the capabilities and the risks of AI-assisted bioengineering.
In addition, reinforcing traditional biosecurity safeguards will have immediate and general benefits because AI tools have increasingly lowered the threshold for sophisticated misuse. Strengthening and globally coordinating gene synthesis screening can help prevent the malicious or accidental creation of risky sequences, especially given the experts’ agreement that AI accelerates iterative design. The growing potential for such risks demands that emerging cloud and automated lab platforms be subject to oversight frameworks that include identity verification, experiment prescreening, and audit trails to prevent, or at least detect, misuse. At the same time, increased and sustained investment in core pandemic preparedness—such as rapid diagnostics, scalable vaccine platforms, and public health surge capacity—will provide a robust backstop. This investment would provide immediate benefits even for nonengineered pathogens and would reduce the impact of any AI-assisted biological threat that might bypass upstream controls.
Mitigating potential risk also involves fostering a culture of responsible AI use in biological research, particularly among developers and institutions deploying increasingly capable tools. Encouraging interdisciplinary coordination—especially among AI practitioners, biologists, and biosecurity experts—can help bridge gaps in understanding and align technical capabilities with risk awareness. Such cross-domain engagement is foundational for building flexible governance frameworks that can adapt to fast-moving developments through such mechanisms as horizon scanning and iterative threat modeling.
Finally, although self-governance approaches, such as predeployment review and responsible disclosure, are important and should play a larger role in fostering safety norms, they are inherently limited in anticipating novel misuse pathways or constraining determined actors, including state-sponsored programs. This constraint underscores the need to implement complementary regulatory and institutional safeguards as quickly as possible.
This publication is part of the RAND research brief series. Research briefs present policy-oriented summaries of individual published, peer-reviewed documents or of a body of published work.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.