Understanding the Theoretical Limits of AI-Enabled Pathogen Design

Insights from a Delphi Study

David Manheim, Adeline E. Williams, Casey Aveggio, Allison Berke

Research | Published Sep 24, 2025

Concerns that artificial intelligence (AI) might enable pathogen design are increasing, but the risks and timelines remain unclear. In this report, the authors present findings from a Delphi study designed to assess the near-term limits of AI-enabled biological design (AIxBio). Rather than forecast specific risks, the authors sought to identify which biological and AI-related constraints might serve as hard or persistent barriers to this form of misuse. To do so, they conducted two parallel Delphi elicitations, engaging experts in biology and AI to independently and comparatively evaluate the limits that each field faces. Participants were asked to assess the validity and applicability of a set of proposed constraints over the near term (2025 to 2027). The constraints included biological trade-offs, such as that between transmissibility and environmental stability, and technical challenges in data availability and AI model generalization.

The overarching goal of this study is to inform policy and risk analysis by clarifying which scenarios fall outside the plausible envelope of near-future capabilities and which are likely to remain infeasible indefinitely. By focusing on what might be impossible or unlikely, this research can help refine the scope of biosecurity planning and improve the signal-to-noise ratio in discussions about AIxBio.

Key Findings

  • There is a risk that AI could be used not just to assist bioterrorists working with extant pathogens but also to create novel lethal pathogens.
  • However, existing and near-term AI is an assistive tool for both experts and malicious actors rather than an independent driver of biological design, although the research suggested that this could change in the longer term (i.e., after 2027).
  • AI already assists experts with bioengineering and adjacent tasks in various ways. Existing AI systems augment biological and bioengineering research by optimizing designs and research and validation processes, assisting with pattern recognition, and greatly speeding up hypothesis generation.
  • The risks posed by AIxBio are shifting and could expand well beyond enabling bioterrorism. Experts were highly uncertain about how rapidly capabilities would evolve over the longer term, and their views varied significantly.
  • Limits are interdependent and context-dependent. Participants from both studies consistently highlighted how various biological and technological constraints overlap, interact, and mutually reinforce one another, complicating efforts to engineer novel pathogens or enhance existing ones.
  • AI effectiveness depends on the quality of biological data. Participants emphasized that data biases, gaps, and inconsistencies remain significant barriers to AIxBio, particularly when it comes to generating novel pathogens or predicting complex biological functions accurately.
  • No strong fundamental limit to AI capabilities was found. Despite significant practical near-term challenges, AI-enabled pathogen design lies neither outside the fundamental limits of biological systems nor beyond the likely future capacity of increasingly general AI.

Recommendations

  • Policymakers should prioritize monitoring scientific progress, especially indicators of future risk, such as progress toward clinical applications of bioengineering, laboratory automation, simulations of biological systems, and generally capable AI systems.
  • Policymakers should explore ways to mitigate the most-plausible and most-actionable risk pathways—specifically, the use of AI tools to assist in optimizing or modifying existing pathogens.
  • Policymakers should promote efforts to strengthen and globally coordinate gene-synthesis screening to help prevent the malicious or accidental creation of risky sequences.
  • Policymakers should subject emerging cloud and automated lab platforms to oversight frameworks that include identity verification, experiment prescreening, and audit trails to prevent or at least detect misuse.
  • Policymakers should increase and sustain investments in core pandemic preparedness efforts, such as rapid diagnostics, scalable vaccine platforms, and public health surge capacity.
  • Stakeholders should cultivate a culture of responsible AI use in biological research, particularly among developers and institutions deploying increasingly capable tools.

Document Details

Citation

Manheim, David, Adeline E. Williams, Casey Aveggio, and Allison Berke, Understanding the Theoretical Limits of AI-Enabled Pathogen Design: Insights from a Delphi Study. Santa Monica, CA: RAND Corporation, 2025. https://www.rand.org/pubs/research_reports/RRA4087-1.html.


This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

