Legal and Policy Approaches to Mitigate Catastrophic Harms from AI
Research | Published Mar 25, 2026
Rapid advances in artificial intelligence (AI) have raised fears of catastrophic harms, such as cyberattacks and bioweapon misuse. To explore ways to reduce these risks, RAND researchers conducted a Delphi study with 24 U.S. AI and policy experts. Participants saw few feasible policy options, favoring risk disclosure incentives and voluntary safety standards over strict regulation. Near-term progress may rely on state and industry action.
The rapid advancement of artificial intelligence (AI) and large language models has generated growing concern about their potential to cause catastrophic harms, including cyberattacks, bioweapon misuse, and loss of human control. Despite numerous governmental, nongovernmental, and industry efforts to establish safeguards, the effectiveness of various legal and policy measures in mitigating such risks remains uncertain.
To address this gap and identify the most promising approaches to reduce the probability of AI-induced catastrophic events, RAND researchers conducted an online Delphi study with 24 U.S.-based experts in AI technology and policy from January to February 2025.
Across three rounds of elicitation, participants evaluated 11 categories of legal and policy measures. Overall, experts expressed limited optimism regarding the feasibility and desirability of most options, with skepticism increasing over time.
The findings suggest that comprehensive federal action to mitigate catastrophic AI risks is unlikely in the near term. Instead, progress may depend on state-level initiatives and voluntary measures by industry and nongovernmental actors. Although even the most promising approaches have limitations, targeted mechanisms, such as structured disclosure programs and legal safe harbors for researchers, could strengthen AI safety and accountability.
This research was independently initiated and conducted by the Center on AI, Security, and Technology within RAND Global and Emerging Risks using income from operations and gifts and grants from philanthropic supporters.
This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.