Legal and Policy Approaches to Mitigate Catastrophic Harms from AI

Sasha Romanosky, Elina Treyger, Elie Alhajjar

Research report published March 25, 2026

The rapid advancement of artificial intelligence (AI) and large language models has generated growing concern about their potential to cause catastrophic harms, including cyberattacks, bioweapon misuse, and loss of human control. Despite numerous governmental, nongovernmental, and industry efforts to establish safeguards, the effectiveness of various legal and policy measures in mitigating such risks remains uncertain.

To address this gap and identify the most promising approaches to reduce the probability of AI-induced catastrophic events, RAND researchers conducted an online Delphi study with 24 U.S.-based experts in AI technology and policy from January to February 2025.

Across three rounds of elicitation, participants evaluated 11 categories of legal and policy measures. Overall, experts expressed limited optimism regarding the feasibility and desirability of most options, with skepticism increasing over time.

The findings suggest that comprehensive federal action to mitigate catastrophic AI risks is unlikely in the near term. Instead, progress may depend on state-level initiatives and voluntary measures by industry and nongovernmental actors. Even the most promising approaches have limitations, but targeted mechanisms, such as structured disclosure programs and legal safe harbors for researchers, could improve their effectiveness in promoting AI safety and accountability.

Key Findings

  • In general, participants expressed limited optimism that any of the 11 legal and policy categories presented would incentivize behavior that reduces the probability of a catastrophic AI-caused harm. Most categories were assessed to be of uncertain desirability and feasibility.
  • Participants became more skeptical about the feasibility of most of the categories by the end of the elicitation.
  • The two most promising legal and policy categories to shape incentives for AI developers to reduce the probability of catastrophic AI-caused harms were incentives to find and disclose risks and voluntary safety standards.
  • Heavier-handed and stricter regulations, such as mandatory audits, government-imposed restrictions on actors, and mandatory safety standards, emerged as the least promising categories.
  • The likelihood of comprehensive measures being adopted by the federal government to reduce catastrophic AI risks in the near term (five to ten years) was perceived as low.
  • Decisionmakers may be well served by focusing on the more promising categories that can be implemented without federal government involvement.
  • Some shortcomings could be addressed in the near term, such as through appropriately structured disclosure programs and legal safe harbors for researchers.


Citation (Chicago Manual of Style)

Romanosky, Sasha, Elina Treyger, and Elie Alhajjar, Legal and Policy Approaches to Mitigate Catastrophic Harms from AI. Santa Monica, CA: RAND Corporation, 2026. https://www.rand.org/pubs/research_reports/RRA4266-1.html.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.