Governance Approaches to Securing Frontier AI

Ian Mitch, Matthew J. Malone, Karen Schwindt, Gregory Smith, Wesley Hurd, Henry Alexander Bradley, James Gimbi

Research report, published Oct 7, 2025

Growing concerns about the societal risks posed by advanced artificial intelligence (AI) systems have prompted debate over whether and how the U.S. government should promote stronger security practices among private-sector developers. Although some companies have made voluntary security commitments, competitive pressures and inconsistent approaches raise questions about the adequacy of self-regulation. At the same time, government intervention carries risks: Overly stringent security requirements could limit innovation, create barriers for small firms, and harm U.S. competitiveness. In this report, the authors help navigate these issues and identify a variety of practicable policy options for government and industry to strengthen frontier AI security.

For their analysis, the authors drew on case studies of U.S. security compliance regimes, expert interviews, and targeted literature reviews. They examined seven diverse compliance frameworks across U.S. industries, including the nuclear, chemical, and health care sectors, to identify lessons and governance models that could inform AI security approaches. They also interviewed government, AI industry, and cybersecurity experts to understand the challenges and opportunities for strengthening security at frontier AI labs.

The authors identified four distinct governance approaches to strengthen security practices among developers of advanced AI systems and reduce the risk of theft, misuse, or compromise. These approaches span a spectrum, from federal regulation mandating the adoption of security standards to voluntary partnerships between government and industry to strengthen security practices. This work enables decisionmakers to better weigh trade-offs and find the right balance between strengthening security and preserving innovation.

Key Findings

Four foundational elements underpin security regimes and are critical to achieving compliance and strengthening security

  • Leadership and institutional capacities are organizational elements that provide the authority, resources, and expertise needed to design and implement the regime.
  • Security requirements establish expectations for how entities should protect systems, data, and physical assets and form the foundation for accountability and oversight.
  • Compliance verification includes the processes used to assess whether entities meet established security requirements, such as audits and reporting requirements.
  • Enforcement mechanisms are tools to drive compliance, including penalties for noncompliance and revocation of benefits.

Two additional principles should guide the design and implementation of compliance regimes: proportionality and stakeholder engagement with transparency

  • Together, these principles aim to minimize undue burdens, enhance the regime's legitimacy among affected parties, and improve the likelihood of compliance.

Informed by these lessons, the authors identified four distinct governance approaches to strengthen AI security

  • Three approaches are illustrative compliance regimes that enforce the adoption of common security requirements for advanced AI systems: a government-enforced standards regime for developers of high-risk models, a government-led program authorizing AI developers for federal use, and an industry-led consortium that certifies compliant firms.
  • A fourth approach entails self-regulation with enhanced voluntary collaboration between government and industry in security standards development, intelligence- and information-sharing, and technical expertise.
  • Selecting the appropriate approach should be guided by the underlying rationale for the option, the perceived level of risk posed by AI systems, and the extent to which market incentives are seen as sufficient to address that risk.

Recommendations

  • If policymakers judge that frontier AI could pose substantial risks to society, they should consider establishing a regulatory regime that requires all high-risk model developers to adopt robust security standards to mitigate threats of theft, misuse, or compromise. Of the governance options identified, this approach would set the highest security bar, but it would also impose the most significant costs and burdens on industry.
  • Alternatively, policymakers could establish a compliance regime that conditions federal use of AI models on meeting security requirements, ensuring the integrity of systems deployed in sensitive government environments. Because this approach applies only to developers who choose to work with the government, it would impose comparatively less burden on industry — but at the cost of potentially limited coverage and less robust security.
  • Beyond government-led regimes, frontier AI developers could establish an industry-led consortium that sets and enforces shared security practices to reduce the competitive pressures that discourage investment in safeguards. Although this approach offers weak incentives for participation because it relies on voluntary agreement, its industry-led nature could foster ownership and help ensure that security standards reflect real-world constraints and technical expertise.
  • If government and industry judge that a formal compliance regime is not warranted, they should instead collaborate to develop AI security standards, enhance threat and vulnerability information-sharing, support red-team evaluations and penetration testing, and strengthen lab personnel security to reduce insider risks. This voluntary approach avoids burdens on developers but may result in uneven security practices across the industry.

NOTE: The title of this publication was updated on October 9, 2025.

Document Details

Citation

Chicago Manual of Style

Mitch, Ian, Matthew J. Malone, Karen Schwindt, Gregory Smith, Wesley Hurd, Henry Alexander Bradley, and James Gimbi, Governance Approaches to Securing Frontier AI. Santa Monica, CA: RAND Corporation, 2025. https://www.rand.org/pubs/research_reports/RRA4159-1.html.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.