Artificial General Intelligence Forecasting and Scenario Analysis

State of the Field, Methodological Gaps, and Strategic Implications

Gopal P. Sarma, Sunny D. Bhatt, Michael Jacob, Rachel Steratore

Research report published March 24, 2026

Over the past five years, expert forecasts for achieving artificial general intelligence (AGI) — defined as systems capable of performing most economically valuable work at or above human level across a wide range of domains — have shifted substantially from mid-century toward the near term, with some estimates in the 2030s or even sooner. Artificial intelligence (AI) systems are increasingly embedded in critical infrastructure, and decisionmakers — from government officials setting national policy, to investors allocating capital, to laboratory leaders planning research agendas — must navigate uncertainty about both the timing and the nature of advanced AI capabilities.

To help researchers, analysts, and decisionmakers orient to the landscape of AGI forecasting and the key sources of disagreement within it, the authors synthesize diverse AGI forecasting methodologies — including expert surveys, prediction markets, compute-centric models, and scenario analysis — to assess their reliability, identify the sources of expert disagreement, and provide a framework for decisionmaking under uncertainty.

Key Findings

AGI timeline estimates have shifted earlier across methods

  • Although individual forecasters occasionally revise estimates into the future, the consistency of this shift across independent methods strengthens the signal; however, all methods share significant limitations.

Forecasting infrastructure is immature

  • The field lacks resolved forecasts for calibration; benchmarks resistant to saturation and gaming; continuous, real-time insight into model capabilities; and independent validation of influential models. Decisionmakers are thus relying on methodologies that are still in nascent stages of development.

Definitional ambiguity drives some, but not all, disagreement

  • Much apparent disagreement reflects differing definitions of AGI and differing forecasting targets. However, substantial disagreement remains even when definitions and information are held constant: people with similar training, working in the same organizations and examining the same data, often reach very different conclusions about timelines and risk.

The policy question is not "when will AGI arrive?" but "how should we prepare for a range of possible AI futures?"

  • Effective strategy under such uncertainty requires three qualities: flexibility to pursue different objectives as circumstances evolve, adaptiveness to respond to unanticipated developments, and robustness to shocks.

Recommendations

  • Rather than debate precise probabilities, planners should focus on scenarios that are plausible, consequential, and challenging — particularly those for which preparation is currently inadequate.
  • Decisionmakers should develop contingency plans for near-term AGI scenarios, establish explicit triggers for reassessment, and match action timing to the type of evidence most relevant to each domain.
  • To increase uptake, forecasts should be structured around conditional questions: How do projected timelines or capability trajectories change under different investment, strategic, or diplomatic scenarios?
  • Current forecasting draws heavily on a narrow set of methods and disciplinary perspectives. Bringing in econometricians, cognitive scientists, historians of technology, and complex systems researchers could surface blind spots and challenge shared assumptions.
  • The degree to which AI systems are accelerating AI research is the leading indicator most relevant to rapid capability gains and potential discontinuities. Frontier laboratories are best positioned to track this internally; they should develop and improve standardized monitoring systems now using formats that could support broader information-sharing if coordination becomes necessary.

Document Details

Citation

Sarma, Gopal P., Sunny D. Bhatt, Michael Jacob, and Rachel Steratore, Artificial General Intelligence Forecasting and Scenario Analysis: State of the Field, Methodological Gaps, and Strategic Implications. Santa Monica, CA: RAND Corporation, 2026. https://www.rand.org/pubs/research_reports/RRA4692-1.html.

This publication is part of the RAND research report series. Research reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND research reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.