Defining Hazardous Capabilities of Biological AI Models

Expert Convening to Inform Future Risk Assessment

Jaspreet Pannu, Sarah L. Gebauer, Henry Alexander Bradley, Dulani Woods, Doni Bloomfield, Allison Berke, Greg McKelvey, Jr., Anita Cicero, Tom Inglesby

Expert Insights | Published Aug 25, 2025

Artificial intelligence (AI) models are increasingly used in biological research, and many contain some degree of biological knowledge. Developers are also creating AI models that can carry out an array of increasingly sophisticated tasks, either under human guidance or autonomously, including models vested with biological capabilities. Identifying which biological AI capabilities pose significant potential risk to global public health, and which do not, is an essential step toward ensuring responsible AI development. Both model developers and policymakers have been attentive to the possibility that AI models could cause public harm, but further specificity is needed regarding how such harms may arise in the realm of biology.

On June 3, 2024, the Johns Hopkins Center for Health Security and RAND convened a group of scientists, AI-model developers, biosecurity experts, and policymakers to discuss the potentially hazardous capabilities of biological AI models, that is, models trained on or capable of meaningfully manipulating substantial quantities of biological data. The interdisciplinary group clarified definitions of biological capabilities that would raise significant public health concerns, acknowledging that one of the capabilities identified already exists. Some capabilities have not yet been achieved but are under active development; others may be years away or may never materialize. Accurately forecasting the direction of AI advances was not considered a prerequisite for this discussion of security. Instead, the aim was to identify specific model capabilities that, if achieved, would pose a serious threat to global public health.

Document Details

Citation

Pannu, Jaspreet, Sarah L. Gebauer, Henry Alexander Bradley, Dulani Woods, Doni Bloomfield, Allison Berke, Greg McKelvey, Jr., Anita Cicero, and Tom Inglesby, Defining Hazardous Capabilities of Biological AI Models: Expert Convening to Inform Future Risk Assessment. Santa Monica, CA: RAND Corporation, 2025. https://www.rand.org/pubs/conf_proceedings/CFA3649-1.html.

This publication is part of the RAND conference proceeding series. Conference proceedings present a collection of papers delivered at a conference or a summary of the conference.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.