When Should We Worry About AI Being Used to Design a Pathogen?

Biology and AI Experts Weigh In

David Manheim, Adeline E. Williams, Casey Aveggio, Allison Berke

Research Summary | Published Oct 1, 2025

Key Findings

  • A major worldwide safety concern is that artificial intelligence (AI) could be used not just to manipulate existing pathogens but to create novel lethal ones; a deeper concern is that, in the future, AI could create them autonomously.
  • In 2025 and the near term, AI is and will likely continue to be an assistive tool rather than an independent driver of biological design.
  • AI already plays various roles in helping researchers conduct bioengineering and adjacent tasks but is not yet autonomous.
  • As AI models become more capable, the risk landscape is shifting and is expected to expand over the longer term (i.e., after 2027), although experts were very uncertain how rapidly capabilities would evolve.
  • The limits of AI models are interdependent and context dependent.
  • AI’s effectiveness depends on the quality of the biological data used to develop and train the model.
  • No fundamental biological limits exist that would prevent AI from eventually having the capability to design pathogens.
  • Cooperation among stakeholders is needed to ensure appropriate monitoring, governance, and mitigation measures.

In the past ten years, the health care industry has seen applications of artificial intelligence (AI)–based tools proliferate rapidly in diagnostic imaging, the self-management of such conditions as diabetes, and other areas. Less well known are the ways that biology researchers are now employing AI tools to quickly elucidate how the three-dimensional structures of cellular organelles and complex macromolecules (such as enzymes) dictate and enable their functions—efforts that previously would have taken years. AI is also enabling biology researchers to assess the impacts of modifying those structures, with potentially lifesaving—or lethal—implications for the health of individuals and the public. For example, AI has the potential to elucidate how a virus transmitted only by physical contact with an infected individual could be genetically altered to spread through the air, allowing—or preventing—the proliferation of dangerous pathogens that could start the next pandemic. And a major concern is whether AI could plausibly—intentionally or unintentionally—design such a genetic alteration itself.

Responding to the potential dangers that these advances would permit, researchers in several countries—and their respective governments—have initiated or collaborated to establish guidelines and limits for the safe and secure use of AI in biotechnology. But before scientists and governments can consider what guidelines are needed, they need to answer big questions about what constitutes safe, secure, and reliable AI; what AI could plausibly enable scientists to do; and what constitutes plausible risks, not just today, but in the near, not-too-distant, and distant futures.

To help refine the scope of biosecurity planning for AI-assisted bioengineering (in which AI input makes the research process easier or faster) and AI-driven bioengineering (in which AI provides blueprints or instructions), a group of researchers in RAND Global and Emerging Risks conducted two expert Delphi panels. The purpose of the panels was to proactively clarify which scenarios that involve the potential for AI to be used to alter pathogens fall within the envelope of plausible near-future capabilities and which scenarios are unlikely or impossible. In other words, the participants set out to assess the near-term limits of AI-assisted biological design, focusing on the feasibility of engineering novel pathogens. By delineating what is feasible and what might be impossible or unlikely in AI-assisted or AI-driven biological design in the near term, this work aims to refine the scope of biosecurity planning and improve the signal-to-noise ratio in discussions of AI-enabled or AI-assisted biology (AIxBio).

A Brief History of Biotechnology Risk Considerations

For at least the past 50 years, policymakers have tried to control applications of emerging biotechnology out of concern about their potential risks. In February 1975, biochemists and geneticists convened the International Congress on Recombinant DNA Molecules at the Asilomar Conference Center in Monterey, California, to establish risk-based principles for conducting research that involved inserting foreign DNA into host cells to enable the genes encoded by that DNA to be copied and expressed. The safety principles that emerged from the meeting ended a moratorium on such research, enabling unimaginable progress toward understanding the genetic basis of development and disease and finding cures for some diseases.

Over the years, periodic clashes among the various stakeholders in the biotechnology, computational biology, public health, medical ethics, and biosecurity policymaking communities have revealed their conflicting cultures, values, and perceptions of the risk of dual-use biotechnology.

Within the past decade, scientific groups have convened both in the United States and internationally to attempt to recreate Asilomar with the goal of exploring and reaching a consensus about the most-concerning uses of AI in biotechnology and drafting policy, albeit with broader international participation and a wider set of concerns. [1] A major sticking point that has arisen from these convenings is the lack of clarity about the natural constraints on biological systems—constraints that might serve to limit what AI might accomplish in modifying naturally existing pathogens or creating new ones. Thus, the RAND study team chose to convene two separate Delphi panels: one comprising experts in AI and computational biology and a second comprising experts in biology and bioengineering. The Delphi method was developed at RAND in the 1950s. It brings together groups of subject-matter experts and other stakeholders to elucidate areas of agreement and persistent disagreement and to identify the underlying rationales among groups of experts. Importantly, the aim of the Delphi panel is not to force consensus.
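The quantitative side of a Delphi exercise is often summarized by tracking whether expert ratings tighten across rounds. The following sketch is purely illustrative: the 1–5 rating scale, panel size, and interquartile-range consensus threshold are assumptions for the example, not details of the RAND study.

```python
import statistics

def iqr(ratings):
    """Interquartile range (Q3 - Q1) of a list of Likert-style ratings."""
    q = statistics.quantiles(sorted(ratings), n=4)
    return q[2] - q[0]

def summarize_item(round1, round2, consensus_iqr=1.0):
    """Summarize one Delphi item: the median rating per round and whether
    the second-round spread is narrow enough to call it agreement.

    An IQR at or below `consensus_iqr` after feedback is a common
    (but here purely illustrative) convergence criterion.
    """
    return {
        "median_r1": statistics.median(round1),
        "median_r2": statistics.median(round2),
        "iqr_r2": iqr(round2),
        "agreement": iqr(round2) <= consensus_iqr,
    }

# Hypothetical 1-5 plausibility ratings for one proposed limit,
# before and after panelists see each other's rationales.
r1 = [2, 3, 5, 1, 4, 3, 2]
r2 = [3, 3, 4, 2, 3, 3, 3]  # ratings typically tighten after feedback
print(summarize_item(r1, r2))
```

The point of the sketch is the shape of the analysis, not the numbers: the Delphi method's value lies in surfacing the rationales behind persistent disagreement, which no summary statistic captures.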

In this study, the two expert panels considered five questions:

  • What are the key areas of uncertainty or disagreement regarding the limits of engineered pathogens and AI-driven bioengineering?
  • What are significant unknowns that emerge from expert elicitation in the fields of biophysics, genetic engineering, and machine learning?
  • How do expert opinions converge or diverge regarding the plausibility of different proposed limits to biorisk and to bio-related AI models?
  • What are the specific conjectured limits that might make it possible to verify or falsify current assumptions about the scalability and functionality of AI-driven bioengineering systems?
  • Which assumptions about the limits of AI systems and bioengineering methods are most likely to be challenged or confirmed? Which types of breakthroughs or work will challenge or confirm these assumptions?

For each question, the panel members were asked to consider a list of the potential existing and near-term limits or constraints on AI’s ability to push the biotechnology envelope. Each potential limit was assessed for the likelihood that it could prevent AI from achieving a desired end.

The panels were explicitly not asked to consider the extent to which large language models (LLMs), a form of AI, could enable untrained actors to create harmful pathogens for the purpose of committing bioterrorism. This is because recent evidence strongly suggests that they could,[2] even though only a year earlier, a RAND red team exercise had concluded that LLMs were not yet mature enough to do so.[3] This rapid shift in thinking reflects the pace of progress in AI capabilities.

Panel-Specific Findings

AI Expert Panel

Focusing on the kinds of potentially risky physiological manipulations that existing or near-term AI models could help elucidate, the AI expert panel agreed on the following five points:

  • In the next few years, AI could assist experts in pathogen design (e.g., by speeding up some tasks). But it could not yet do so independently or enable non-experts to do so.
  • AI’s power depends on the quality of the biological data used to develop and train the model. Incomplete or biased datasets will limit what an AI model can learn, resulting in inaccurate predictions of a pathogen’s real-world behavior, and the needed data on pathogen functionality can be difficult, if not impossible, to collect.
  • AI struggles with complex and long-term predictions. Elucidating questions with many potential moderating factors, such as host-pathogen interactions, remains challenging.
  • Experimental validation of AI-proposed designs is a significant bottleneck. Real-world testing is slow, expensive, and often infeasible or unethical, and simulation can go only so far.
  • General-purpose AI models, such as LLMs, are less useful than ones tailored to solve narrow biological problems. These narrow models are unlikely to be usable by non-experts.

Biotechnology Expert Panel

Focusing on the near-term biological limits to pathogen engineering, the biotechnology panel agreed on the following five challenges:

  • Pathogen transmissibility is limited physically and biologically. A pathogen’s ability to spread is limited by its survival outside the host, its mode of transmission, and its effects on host behavior (e.g., a dead or incapacitated host might not be ideal for the propagation of a line of pathogens).
  • Environmental stability is hard to engineer. Increasing a pathogen’s survivability outside the host can come with trade-offs, such as reducing effective replication, and these outcomes are difficult to predict computationally.
  • Fitness trade-offs constrain engineered pathogens. Like animals or plants selectively bred for specific traits using traditional breeding methods, genetically modified pathogens can become weaker in unrelated ways.
  • Engineering constraints depend on the type of pathogen. A modification that is simple in one type of organism—virus, bacterium, or other—can be impossible in another, given existing knowledge.
  • Human responses to infections can limit the results of genetic modifications intended to increase spread. Immunization, social distancing or quarantine, and medical countermeasures can rapidly blunt the spread of even well-designed pathogens.

Overall Consensus Between the Panels

Combing through the transcripts of the panels’ deliberations, the researchers identified areas of agreement and of remaining uncertainty between the panels.

AI is already an assistive tool and will remain so in the near term (for example, by carrying out automated work in cloud labs), but it cannot act on its own to alter or design a new pathogen. Panel members agreed that, without the creation of intelligent systems, such an ability would not emerge within the next two years (i.e., by 2027).

AI already assists with various bioengineering and adjacent tasks for both experts and malicious actors, especially the recognition of patterns in large datasets and hypothesis formulation and validation. Experts noted that AI might be able to automate some laboratory work in the near term, but such capabilities would be constrained by the need for human guidance, interpretation, and validation.

The potential risks of AI involvement in the design of pathogen-based bioweapons are increasing, although experts differed in their estimates of how quickly. Some experts expected slow progress and only marginally increased risk, while others worried that, beyond some threshold capability level, models could rapidly gain the ability to autonomously design novel bioweapons. Specific enablers of the shifting risk include automation, simulation, progress in clinical applications, and progress toward generally capable AI.

The limitations on the ability of AI to autonomously alter or create a novel bioweapon are complex, interdependent, and context dependent. A key takeaway from both the biological and AI elicitations is that many of the identified limits are interdependent, making it challenging to consider any single constraint in isolation. Experts from both studies consistently highlighted how various biological and technological constraints overlap, interact, and mutually reinforce one another, complicating efforts to engineer novel pathogens or enhance existing ones.

AI’s effectiveness in biological design (and everything else) depends on the quality and quantity of the biological data used to develop and train the model. Experts from both panels emphasized that data biases, gaps, and inconsistencies remain significant barriers to AI-driven biological design, particularly when it comes to generating novel pathogens or predicting complex biological functions accurately.

No strong fundamental limit to AI capabilities was identified. Despite existing limitations and challenges, panelists agreed that AI should eventually be able to design novel, viable pathogens.

Implications for AIxBio Policy

Despite growing concerns about the potential for AI to go rogue and autonomously design radically novel and dangerous pathogens, experts suggest that the worst fears remain implausible, at least through 2027. But progress in the ability of AI to act autonomously could radically change the risk outlook, and signs of such progress need to be monitored. Experts suggest that, instead of acting autonomously, AI tools are likely to continue to serve as accelerators for already capable actors. Although some experts anticipated rapid gains in AI capabilities, no consensus emerged on what direction these would take or how quickly.

Nevertheless, the panel findings support the need for governance and suggest that the two most immediate targets are ensuring monitoring of scientific progress as it affects the future risk landscape and mitigating the most-plausible and most-actionable risk pathways involved in using AI tools to assist in optimizing or modifying existing pathogens.

Monitoring would most fruitfully track progress in the following four areas:

  • capabilities for clinical applications of bioengineering
  • laboratory automation that could provide more-rapid feedback for both applications and the training of AI systems
  • reliable simulations of biological systems that would reduce the need for experimental validation or reliance on training data
  • generally capable AI systems (i.e., systems whose capabilities span many domains).

In the near term, experts must clarify where hard limits exist—for example, in pathogen modification, particularly at the intersection of biological and AI constraints. Doing so can significantly sharpen threat models and help policymakers focus on credible misuse cases. In this context, biological data infrastructure emerges as a key leverage point: The quality, accessibility, and oversight of genomic and experimental data will shape both the capabilities and the risks of AI-assisted bioengineering.

In addition, reinforcing traditional biosecurity safeguards will have immediate and general benefits because AI tools have increasingly lowered the threshold for sophisticated misuse. Strengthening and globally coordinating gene synthesis screening can help prevent the malicious or accidental creation of risky sequences, especially considering that experts agreed that AI accelerates iterative design. The growing potential for such risks demands that emerging cloud and automated lab platforms be subject to oversight frameworks that include identity verification, experiment prescreening, and audit trails to prevent or at least detect misuse. At the same time, increased and sustained investment in core pandemic preparedness—such as rapid diagnostics, scalable vaccine platforms, and public health surge capacity—will provide a robust backstop. This investment would provide immediate benefits even for nonengineered pathogens and would reduce the impact of any AI-assisted biological threat that might bypass upstream controls.
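Gene synthesis screening is, at its core, a sequence-matching problem: an ordered DNA sequence is checked against a database of sequences of concern before it is synthesized. The sketch below is a deliberately simplified illustration, not how operational screening works; real providers rely on curated hazard databases, fuzzy homology search, and expert review, whereas this toy example flags only exact matches of a hypothetical watchlist fragment on either DNA strand.

```python
# Illustrative only: HAZARD_KMERS is a made-up watchlist fragment,
# and K is an arbitrary window size chosen for the example.
HAZARD_KMERS = {"ATGCGTACGTTAGC"}
K = 14

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Reverse complement of a DNA sequence (A<->T, C<->G, reversed)."""
    return seq.translate(COMPLEMENT)[::-1]

def flag_order(seq: str, hazards=HAZARD_KMERS, k=K):
    """Return (strand, position) pairs where an ordered sequence,
    on either strand, exactly matches a watchlist fragment."""
    seq = seq.upper()
    hits = []
    for strand_label, s in (("+", seq), ("-", reverse_complement(seq))):
        for i in range(len(s) - k + 1):
            if s[i:i + k] in hazards:
                hits.append((strand_label, i))
    return hits

# An order embedding the watchlist fragment is flagged; a benign one is not.
print(flag_order("TTATGCGTACGTTAGCAA"))
print(flag_order("AAAAAAAAAAAAAAAA"))
```

Even this toy version shows why global coordination matters: screening only works if every provider checks against a comparably comprehensive database, since an actor can simply route an order to whichever vendor screens least.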

Mitigating potential risk also involves fostering a culture of responsible AI use in biological research, particularly among developers and institutions deploying increasingly capable tools. Encouraging interdisciplinary coordination—especially among AI practitioners, biologists, and biosecurity experts—can help bridge gaps in understanding and align technical capabilities with risk awareness. Such cross-domain engagement is foundational for building flexible governance frameworks that can adapt to fast-moving developments through such mechanisms as horizon scanning and iterative threat modeling.

Finally, although self-governance approaches, such as predeployment review and responsible disclosure, are important and should play a larger role in fostering safety norms, they are inherently limited in anticipating novel misuse pathways or constraining determined actors, including state-sponsored programs. This constraint underscores the need to implement complementary regulatory and institutional safeguards as quickly as possible.

Notes

  1. National Academies of Sciences, Engineering, and Medicine, The Age of AI in the Life Sciences: Benefits and Biosecurity Considerations, National Academies Press, 2025.
  2. Roger Brent and T. Greg McKelvey, Jr., “Contemporary AI Foundation Models Increase Biological Weapons Risk,” arXiv:2506.13798v1, June 12, 2025.
  3. Christopher A. Mouton, Caleb Lucas, and Ella Guest, The Operational Risks of AI in Large-Scale Biological Attacks: Results of a Red-Team Study, RAND Corporation, RR-A2977-2, 2024. As of September 4, 2025: https://www.rand.org/pubs/research_reports/RRA2977-2.html

Citation

Chicago Manual of Style

Manheim, David, Adeline E. Williams, Casey Aveggio, and Allison Berke, When Should We Worry About AI Being Used to Design a Pathogen? Biology and AI Experts Weigh In. Santa Monica, CA: RAND Corporation, 2025. https://www.rand.org/pubs/research_briefs/RBA4087-1.html.

This publication is part of the RAND research brief series. Research briefs present policy-oriented summaries of individual published, peer-reviewed documents or of a body of published work.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.