Reality Checking a Major National R&D Investment in AI Trustworthiness, Safety, and Security
Weighing the Costs and Benefits of a $10 Billion Bet on Increasing the Robustness of the United States’ AI Future
Research | Published Mar 12, 2026
In this report, the authors use break-even analysis to examine whether a major national investment in artificial intelligence (AI) trustworthiness research and development—framed as a $10 billion expenditure—could be justified across a variety of plausible assumptions about AI risk, economic benefit, and potential costs.
The rapid advance of artificial intelligence (AI) has simultaneously raised concerns that the technology might result in significant harms and raised hopes that its contributions across different industries and applications could produce major economic benefits. Both the U.S. government and the private sector have made significant investments in pursuit of these benefits, and there have been calls for similarly significant investments that are focused on reducing the potential for AI-caused risks, including potentially catastrophic risks.
In this report, the authors take on the question of how to think about a big bet on research and development (R&D) focused on AI trustworthiness, safety, and security: a $10 billion U.S. government investment. Policy debate about such large investments is often dominated by the assumptions that decisionmakers and other participants bring to it, such as whether they already believe that AI is fundamentally unsafe or that such investments would help or hurt national competitiveness. The authors take a different approach by applying break-even analysis to consider how potential benefits, costs, and trade-offs might play out under a variety of defensible approximations for different factors. Rather than requiring agreement on contentious questions, such as the probability of AI catastrophe, the analysis explores how different beliefs about risk and benefit interact to determine whether such an investment would pay off.
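The core logic of a break-even analysis can be illustrated with a minimal sketch. This is not the report's model; the function, the $10 billion figure's treatment as a single up-front cost, and the hypothetical harm magnitudes below are all simplifying assumptions for illustration only. The idea is to ask: given a monetized estimate of the harm a catastrophe would cause, how small a reduction in its probability would be enough for the investment to pay for itself?

```python
def breakeven_risk_reduction(cost: float, expected_harm: float) -> float:
    """Smallest absolute reduction in catastrophe probability (delta-p)
    at which an investment of `cost` breaks even, assuming the only
    benefit is avoiding `expected_harm` (the monetized loss if the
    catastrophe occurs). Break-even condition: delta_p * harm >= cost.
    """
    return cost / expected_harm


# Assumed parameters, purely for illustration:
cost = 10e9  # the $10 billion investment framed in the report

# Hypothetical monetized harm magnitudes ($1T, $10T, $100T):
for harm in (1e12, 10e12, 100e12):
    dp = breakeven_risk_reduction(cost, harm)
    print(f"harm ${harm:,.0f}: break-even probability reduction = {dp:.4f}")
```

For example, if a catastrophe were monetized at $1 trillion, the investment would break even if it reduced the probability of that catastrophe by one percentage point; at $100 trillion, a reduction of 0.01 percentage points would suffice. This is the sense in which break-even analysis sidesteps agreement on the actual probability: it only asks whether a given belief about risk reduction clears the threshold.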
This research was independently initiated and conducted by the Center for the Geopolitics of Artificial General Intelligence within RAND Global and Emerging Risks using income from operations and gifts from RAND supporters, including philanthropic gifts.
This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.