Reality Checking a Major National R&D Investment in AI Trustworthiness, Safety, and Security

Weighing the Costs and Benefits of a $10 Billion Bet on Increasing the Robustness of the United States’ AI Future

Brian A. Jackson, Pauline Moore

Research report published Mar 12, 2026

The rapid advance of artificial intelligence (AI) has raised concerns that the technology might cause significant harms, while also raising hopes that its contributions across industries and applications could produce major economic benefits. Both the U.S. government and the private sector have made significant investments in pursuit of these benefits, and there have been calls for similarly significant investments focused on reducing the potential for AI-caused risks, including potentially catastrophic ones.

In this report, the authors take on the question of how to think about a big bet on research and development (R&D) that is focused on AI trustworthiness, safety, and security: a $10 billion U.S. government investment. Policy debate about such large investments is often dominated by the assumptions that decisionmakers or others have going in—that is, whether they already think that AI is fundamentally unsafe or whether such investments would help or hurt national competitiveness. The authors take a different approach to the question by applying break-even analysis to consider how potential benefits, costs, and trade-offs might play out using a variety of defensible approximations for different factors. Rather than requiring agreement on contentious questions, such as the probability of AI catastrophe, the analysis explores how different beliefs about risk and benefit interact to affect whether such an investment would pay off.
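The break-even logic described above can be illustrated with a minimal sketch. This is not the report's actual model; every parameter value below is a hypothetical placeholder chosen only to show how the two benefit pathways (avoided catastrophe losses and accelerated adoption) combine against a fixed investment cost.

```python
# Illustrative break-even sketch for a $10B AI trustworthiness investment.
# NOTE: all parameter values are hypothetical placeholders, not figures
# from the report; the structure merely mirrors the two-pathway framing.

INVESTMENT = 10e9  # $10 billion R&D investment


def expected_benefit(p_catastrophe: float,
                     risk_reduction: float,
                     catastrophe_cost: float,
                     adoption_uplift: float,
                     ai_growth_benefit: float) -> float:
    """Expected benefit from two pathways:
    1) avoided catastrophe losses (probability x reduction x cost), and
    2) economic gains from faster/broader AI adoption."""
    avoided_harm = p_catastrophe * risk_reduction * catastrophe_cost
    adoption_gain = adoption_uplift * ai_growth_benefit
    return avoided_harm + adoption_gain


def breaks_even(**params: float) -> bool:
    """True if expected benefit meets or exceeds the investment cost."""
    return expected_benefit(**params) >= INVESTMENT


# Even modest assumptions on both dimensions can clear the bar:
result = breaks_even(
    p_catastrophe=0.01,       # hypothetical 1% chance of AI catastrophe
    risk_reduction=0.10,      # investment cuts that risk by 10%
    catastrophe_cost=1e12,    # $1 trillion in losses if it occurs
    adoption_uplift=0.02,     # 2% boost to AI adoption benefits
    ai_growth_benefit=5e11,   # $500 billion in AI-driven gains at stake
)
print(result)  # True: $1B avoided harm + $10B adoption gain >= $10B
```

The point of the sketch is the interaction the authors describe: neither a high catastrophe probability nor a large adoption effect is required on its own; small contributions from both pathways can jointly exceed the investment, given large enough stakes.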

Key Findings

  • Framing AI research investments around trustworthiness rather than solely safety captures two distinct pathways to benefit: (1) reducing the chance or severity of harmful AI incidents and (2) increasing the likelihood that advanced AI systems will be widely adopted and produce transformational economic growth. Because broad adoption of advanced AI depends on confidence that systems will perform reliably and not cause costly failures, investments that improve actual and perceived trustworthiness can accelerate diffusion across the economy—complementing innovation goals instead of competing with them.
  • Across wide ranges of uncertainty about both the probability of AI-caused catastrophe and the degree to which trustworthiness investments might promote adoption, a substantial R&D investment can break even. The analysis does not require belief in very high probabilities of AI disaster or very large effects on adoption for the investment to be justified; even modest effects on either dimension—or a combination of both—can be sufficient given the large potential stakes involved.
  • The possibility that safety-focused efforts could impose economic costs—by slowing innovation, diverting talent, or creating regulatory friction—is real and must be considered, but it does not eliminate the case for investment. When potential drag on AI-driven economic growth is included in the analysis, the conditions for breaking even become more demanding, underscoring the importance of designing trustworthiness initiatives that enable rather than constrain responsible development and deployment.

Document Details

Citation

Chicago Manual of Style

Jackson, Brian A. and Pauline Moore, Reality Checking a Major National R&D Investment in AI Trustworthiness, Safety, and Security: Weighing the Costs and Benefits of a $10 Billion Bet on Increasing the Robustness of the United States’ AI Future. Santa Monica, CA: RAND Corporation, 2026. https://www.rand.org/pubs/research_reports/RRA4718-1.html.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

