Biosecurity governance across uncertain artificial intelligence futures

Perspectives from an expert workshop on managing biological risks and opportunities at the intersection of artificial intelligence and the life sciences.


What is the issue?

Rapid advances in artificial intelligence (AI) tools and capabilities applied to the life sciences offer transformative benefits for science and medicine but also create new risks for global biosecurity. These tools could be misused by a range of threat actors to develop and deploy dangerous biological agents. Existing oversight and prohibition regimes, such as the Biological Weapons Convention, predate today's AI capabilities and are not well suited to the unique challenges they pose. The scientific and policy communities also lack a comprehensive understanding of how capable frontier AI models are and of the ceiling of harm they might enable in the biological domain. This gap hampers efforts to develop preventative and defensive capabilities to guard against societal harm.

How did we help?

RAND Europe and RAND Global and Emerging Risks partnered with the Nuclear Threat Initiative to convene a workshop bringing together approximately 25 leading experts from government, industry, and research institutions. The event, held alongside the 2025 AI Action Summit in Paris, aimed to gather expert perspectives on emerging AI-biosecurity challenges and identify practical solutions. The team administered pre-workshop questionnaires and facilitated three focused sessions examining: risk thresholds and mitigations; strategic research agendas for reducing anticipated risks; and opportunities for collaboration, resource sharing, and best practices among stakeholders.

What did we find?

Workshop participants identified practical steps to reduce risks and build resilient systems. Key themes included:

- shifting from broad hypothetical scenarios to specific, measurable risk indicators;
- strong support for modular "if-then" hazard-threshold frameworks that tie concrete triggers to predefined responses;
- the need for collaborative, cross-institutional methods to measure AI's biosecurity impact;
- the importance of governance systems that can evolve alongside fast-moving technologies;
- the critical role of cross-sector coordination and privacy-preserving information sharing; and
- the value of technical safeguards, such as access controls and auditing systems, in reinforcing biosafety norms.

What can be done?

Participants proposed eight key actions:

- convene flexible "coalitions of the willing" to develop adaptable solutions;
- identify focal points for cross-sector collaboration through organisations such as the WHO and UN institutes;
- strengthen coordination among government agencies and regulatory bodies;
- develop international standards, including unified safety frameworks;
- use strategic scenario planning to strengthen preparedness;
- promote clear, audience-appropriate risk communication;
- invest in risk measurement, assessment tools, and safe, controlled testing environments; and
- adopt a "responsible innovation" lens that balances progress with risk reduction.

These measures require sustained funding, public-private partnerships, and a commitment to transparency across sectors.