Strategic Cooperation on AI
Core Functions
Research | Published Mar 10, 2026
Strategic cooperation among states and other actors may be crucial for harnessing AI's benefits and addressing its risks. This report identifies four core functions that such cooperation could serve — research, standard-setting, monitoring, and verification — and examines how 17 existing international organizations implement them, drawing out patterns and insights relevant for AI policy.
As artificial intelligence (AI) advances and its global impacts deepen, strategic cooperation among states and other actors becomes increasingly important. This report examines what functions such cooperation could serve and how those functions are currently implemented in other domains.
The report identifies three objectives that strategic cooperation on AI could advance — improving understanding of AI capabilities and risks, promoting reliable AI development while managing proliferation, and preparing to mitigate and respond to harms — and four core functions likely to be relevant across a range of possible futures: research, standard-setting, monitoring, and verification.
To ground this analysis, the report examines how 17 existing international organizations implement these functions, drawing out patterns and implementation insights. Three key findings emerge: functions are rarely performed in isolation; the same function can be implemented in substantially different ways depending on context; and even well-designed functions face implementation challenges, which organizations address through mechanisms ranging from capacity-building to reputational pressure.
The functions identified are not tied to any particular institutional form: they can be implemented through formal international organizations, multilateral or bilateral arrangements, ad hoc coalitions, or public-private mechanisms. This report is intended for policymakers, AI researchers, and other stakeholders working to advance effective strategic cooperation on AI.
This research was independently initiated and conducted by the Center on AI, Security, and Technology within RAND Global and Emerging Risks using income from operations and gifts and grants from philanthropic supporters.
This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.