Strategic Cooperation on AI

Core Functions

Brodi Kotila, Katherine H. Tucker, Samantha Cherney, Austin Wyatt

Research report published March 10, 2026

As artificial intelligence (AI) advances and its global impacts deepen, strategic cooperation among states and other actors becomes increasingly important. This report examines what functions such cooperation could serve and how those functions are currently implemented in other domains.

The report identifies three objectives that strategic cooperation on AI could advance — improving understanding of AI capabilities and risks, promoting reliable AI development while managing proliferation, and preparing to mitigate and respond to harms — and four core functions likely to be relevant across a range of possible futures: research, standard-setting, monitoring, and verification.

To ground this analysis, the report examines how 17 existing international organizations implement these functions, drawing out patterns and implementation insights. Three key findings emerge: Functions are rarely performed in isolation; the same function can be implemented in substantially different ways depending on context; and even well-designed functions face implementation challenges that organizations address through mechanisms ranging from capacity-building to reputational pressure.

The functions identified are not tied to any particular institutional form; they can be implemented through formal international organizations, multilateral or bilateral arrangements, ad hoc coalitions, or public-private mechanisms. This report is intended for policymakers, AI researchers, and other stakeholders working to advance effective strategic cooperation on AI.

Key Findings

  • Four potential barriers to strategic cooperation on AI are misaligned incentives, deep uncertainty, competition between states for power and influence, and the inability to make credible commitments to a cooperative effort. Despite these barriers, cooperation is already underway through summits, AI safety institutes, and bilateral dialogues.
  • Three overlapping objectives might be pursued through strategic cooperation: improving understanding of AI capabilities and risks, promoting reliable AI development while managing proliferation, and mitigating and responding to AI-related harms.
  • Four core functions are likely to be necessary components of effective strategic cooperation on AI: research, standard-setting, monitoring, and verification. Supporting functions — such as norm-building, convening stakeholders, information-sharing, forecasting, and agenda-setting — enable the core functions.
  • Functions are rarely performed in isolation. Effective cooperation typically involves deliberate combinations of functions tailored to specific objectives.
  • The same function can be implemented in substantially different ways. Verification, for example, ranges from continuous remote monitoring to periodic inspections to peer review and public disclosure. The appropriate approach depends on what is being verified, the intrusiveness parties will accept, and technical feasibility.
  • Even well-designed functions face implementation challenges. Organizations have developed mechanisms — from capacity-building to reputational pressure — to bridge the gap between function design and effective implementation.
  • The functions identified are not tied to any particular institutional form. Policymakers can use this analysis to identify which functions are most critical for their objectives, consider how those functions might be combined, and draw on implementation approaches from other domains — whether they are designing formal agreements, ad hoc coalitions, or bilateral arrangements.

Citation

Chicago Manual of Style

Kotila, Brodi, Katherine H. Tucker, Samantha Cherney, and Austin Wyatt, Strategic Cooperation on AI: Core Functions. Santa Monica, CA: RAND Corporation, 2026. https://www.rand.org/pubs/research_reports/RRA3849-1.html.
This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.