Contemporary Foundation AI Models Increase Biological Weapons Risk

Roger Brent, Greg McKelvey, Jr.

Expert Insights. Published Dec 31, 2025.

The rapid advancement of artificial intelligence (AI) capabilities has sparked significant concern regarding AI's potential to facilitate biological weapons development. Flawed safety assessments that rest on assumptions about tacit knowledge and that rely on inadequate benchmarks may create a false sense of security, increasing the probability that such weapons will be developed and used.

To challenge the claim that tacit knowledge is a barrier to biological weapons development, the authors consider two examples: a case study of a Norwegian ultranationalist who, working alone, successfully carried out complex chemical syntheses to construct an explosive, and past efforts to document the steps needed to produce contagious viral pathogens. Drawing on these examples, they identify the elements of success for goal-directed technical development projects that large language models can describe in words.

Engaging in dialogues with three 2024 foundation AI models—Llama 3.1 405B, ChatGPT-4o, and Claude 3.5 Sonnet (new)—the authors document how these models successfully provide accurate instructions and guidance for recovering a live poliovirus from a construct built from commercially obtained synthetic DNA, a test case applicable to producing other pathogenic viruses. These examples demonstrate that models are already capable of guiding motivated users to develop biological weapons.

The authors propose improved benchmarks, derived from a task structure framework, to enable more-comprehensive assessments of AI models' ability to guide users through these elements of success. Such benchmarks could also guide supervised fine-tuning to mitigate risks from future models before deployment. Better benchmarks alone, however, may not suffice; broader interventions may be needed to avoid catastrophic outcomes.

Citation

Chicago Manual of Style

Brent, Roger, and Greg McKelvey, Jr. Contemporary Foundation AI Models Increase Biological Weapons Risk. Santa Monica, CA: RAND Corporation, 2025. https://www.rand.org/pubs/perspectives/PEA3853-1.html.

This publication is part of the RAND expert insights series. The expert insights series presents perspectives on timely policy issues.
