Federal Revenue When AI Replaces Labor: An Examination of Economic Scenarios with Highly Capable Artificial Intelligence
Data Viz | Published Sep 23, 2025
From the internal combustion engine to computers and the internet, technological innovations have repeatedly transformed what we do and how we do it. As artificial intelligence (AI) capabilities rapidly advance in the 2020s, the world is witnessing another wave of transformative technological change. (Generative AI [GenAI] broadly refers to the use of pretrained AI models that generate text, images, and other content; examples include the widely used large language models [LLMs] ChatGPT and Claude. Artificial general intelligence [AGI] is a theoretical concept without a consensus definition but generally refers to AI that is capable of performing general functions and tasks at or above human intelligence levels.) The growth of AI capabilities and their move into the mainstream have generated anxieties similar to those provoked by earlier technological shifts, prompting stakeholders across the United States to ask such questions as “What is artificial intelligence, and how does it compare with human intelligence?” and “Who will benefit from AI, and who will lose?”
Many of the narratives that provide answers to these types of questions have focused on the most-extreme possibilities: AI will supercharge exponential growth, or AI will lead to potentially catastrophic risks. But the likelier prospect is a world dominated by the widespread, nonlinear adoption of non-superhuman AI in ways that, despite this uneven uptake, have transformative impacts on how people live their lives.
To meet this moment, policymakers will need to contend with competing demands that put innovation, productivity, and growth at potential odds with social and economic well-being factors, such as secure employment, a sense of purpose and belonging, and data privacy. And they will need to act quickly, given the scale and pace of AI adoption.
RAND's Social and Economic Policy Rethink Initiative has developed a volume of work on the opportunities and challenges presented by AI adoption. This volume aims to support policy, industry, and community leaders who are confronting key questions about AI: What are the social and economic policy stakes of AI adoption? What types of AI adoption trade-offs will policymakers need to manage? How can policymakers map AI impacts to develop agile AI responses?
What Are the Social and Economic Policy Stakes of AI Adoption?
Overall, RAND researchers found tremendous potential upsides to the adoption of AI, along with an urgent need to manage downside risks. Looking at macroeconomic impacts, they saw potential gains in productivity growth and improvements in the federal debt outlook but acknowledged that, even with the best analysis, there is limited visibility into AI's impacts on the labor market. Their analysis of early evidence suggests that AI is complementing labor productivity but that there is a risk of future labor displacement as capabilities advance. In their analysis of the macroeconomic implications of AI, they found that, by 2035, moderate AI-driven productivity gains could boost real per-capita gross domestic product by nearly $7,000.
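To give a rough sense of the scale behind a figure like that, a back-of-the-envelope compound-growth calculation is shown below. The baseline per-capita GDP figure and both growth rates are illustrative assumptions chosen for the sketch, not inputs or outputs of the RAND analysis.

```python
# Illustrative compound-growth sketch of an AI productivity boost.
# All inputs are assumptions for illustration, not figures from the report.

def per_capita_gdp(base: float, growth: float, years: int) -> float:
    """Project per-capita GDP forward at a constant annual growth rate."""
    return base * (1 + growth) ** years

base_gdp = 82_000.0      # assumed current U.S. real per-capita GDP, dollars
years = 10               # roughly 2025 -> 2035
baseline_growth = 0.015  # assumed trend growth without AI
ai_growth = 0.022        # assumed trend growth plus a moderate AI boost

baseline = per_capita_gdp(base_gdp, baseline_growth, years)
with_ai = per_capita_gdp(base_gdp, ai_growth, years)
print(f"AI-driven gain by 2035: ${with_ai - baseline:,.0f} per person")
```

Under these assumed rates, an extra 0.7 percentage point of annual growth compounds into a per-person gain on the order of several thousand dollars within a decade, which is the kind of arithmetic underlying such projections.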
But these productivity gains must be balanced against questions about how human flourishing is being supported and what that means for the social contract. Risks and challenges from AI adoption that extend to privacy, oversight, and potential societal polarization require careful management and the protection of individual rights and democratic processes. Managing the challenges of AI adoption while enabling its opportunities will also require new metrics for understanding the complex, multidimensional impacts of AI adoption on the labor market, macroeconomy, and society.
Read More: “Macroeconomic Implications of Artificial Intelligence” Read More: “Artificial Intelligence and the Social Contract”
What Types of AI Adoption Trade-Offs Will Policymakers Need to Manage?
Debates about trade-offs are already playing out within individual sectors, such as financial services, health care, and climate and energy. These debates have raised important questions, including the following:
Will AI increase access to investment and financial services or create new sources of market instability?
The financial services sector stands as a pioneer in AI adoption. For example, AI applications in risk management and fraud detection have already shown promise, helping to identify anomalies in transaction data that might indicate fraud and enabling financial institutions to automatically check compliance and provide alerts for potential breaches. In the future, breakthrough AI capabilities could enable instant lending decisions and predictive trading.
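As a toy illustration of the anomaly-flagging idea mentioned above (not any institution's actual system), a simple z-score screen can mark a transaction that sits far outside an account's typical spending. The threshold and transaction data here are invented for the example.

```python
import statistics

def is_anomalous(history: list[float], amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount sits far outside the account's
    typical spending, using a simple z-score screen."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return False
    return abs(amount - mean) / stdev > z_threshold

history = [42.0, 37.5, 55.0, 48.2, 39.9, 61.0, 44.3, 52.8]  # invented card history
print(is_anomalous(history, 5_200.00))  # large charge: flagged
print(is_anomalous(history, 49.10))     # ordinary charge: not flagged
```

Production fraud systems use far richer features (merchant, location, timing) and learned models, but the core idea of scoring a new event against a baseline of normal behavior is the same.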
But these advances could introduce novel forms of systemic risk and potentially hinder oversight and accountability. Additionally, as financial institutions deploy increasingly sophisticated AI systems, they will need access to vast amounts of sensitive customer information to function effectively. Furthermore, the nascent integration of AI and the diverse landscape of cryptocurrencies and digital assets could disrupt existing financial models, particularly through disintermediation, prompting a need for adaptive regulatory frameworks to manage systemic risks.
Read More: “AI Capabilities and Uptake: Impacts on Healthcare, Financial Services, Climate, and Transportation” Read More: “Artificial Intelligence and Crypto in Financial Services”
Will AI help patients or hurt doctors?
In the health care industry, AI advances have demonstrated enormous potential in such areas as drug development. The AI-based AlphaFold model developed by Google DeepMind, for example, represented a breakthrough in predicting protein structures, and its creators were awarded the Nobel Prize in Chemistry. Although no AI-designed pharmaceuticals have yet been shown effective and safe in clinical trials, dozens are in the development pipeline, suggesting that AI could become standard in drug discovery. In addition, existing AI algorithms used for the early detection of cancer, heart failure, or sepsis have outperformed humans; however, the adoption of these capabilities has been limited. Growing uptake of and trust in AI technologies could enable more early interventions and cost savings, but it is uncertain whether such a shift would help physicians by reducing tasks or cause burnout because of liability and verification responsibilities. There is also limited evidence regarding the safety and effectiveness of many AI applications in health care: Very few AI-based tools are evaluated in randomized trials that measure health impacts, the gold standard for both doctors and the public.
Will AI help us manage climate risks and disasters?
AI has the potential to streamline decarbonization and transition planning and improve the efficiency of existing energy systems. AI also has the potential to help accelerate fossil fuel exploration and extraction. This overarching trade-off has coincided with a slower rate of AI adoption in the climate and energy space than in other sectors. Nevertheless, AI is helping consumers improve energy efficiency and lower their costs while allowing energy firms to forecast demand and dynamically source energy supplies based on demand fluctuations. Additionally, researchers are developing tools to enable autonomous grid management. AI is also advancing climate models and weather forecasts to support more-robust climate risk assessments, which raises questions about how financial institutions and insurance and real estate companies will respond. And across every sector, AI is already leading to massive energy demands; these needs have the potential to compete with community and residential needs, leading to higher prices and even energy shortages. At the same time, the need to meet new energy demands could result in higher rates of research, investment, and development that encourage low-carbon energy innovations.
For these sectors and many others, concerns about widespread job displacement, significant invasion of data privacy, cybersecurity failures, bias and increasing social inequity, and misalignment with human values will be a defining feature of the emerging AI landscape. But it is not just individual sector impacts that policymakers will need to confront. Cascading failures across interconnected systems could amplify AI risks because disruptions in one sector can propagate widespread impacts through digital, physical, and social networks. Furthermore, traditional economic metrics (such as gross domestic product) are inadequate for measuring AI’s multidimensional impacts on human well-being, social inequalities, and environmental sustainability.
Read More: “AI Capabilities and Uptake: Impacts on Healthcare, Financial Services, Climate, and Transportation” Read More: “Rethinking Social and Economic Policy in the Age of General-Purpose Artificial Intelligence”
Using Impact Mapping to Prepare Policymakers for Agile AI Responses
Given the potential speed and disruption of AI adoption, policymakers need a way to quickly evaluate and respond to these changes, even as the landscape continues to shift. Scenario-based mapping of AI disruptors can help policymakers assess a wide variety of outcomes (both positive and negative), better understand primary and secondary impacts, recognize barriers, uncover trade-offs, and develop courses of action tailored to specific plausible futures. To navigate policy-relevant questions within and across sectors, we developed the Comprehensive Mapping Protocol for Anticipating and Adapting to Systemic Shocks (COMPASS).
To understand how such an approach might work, consider the following three AI technologies, which are either currently being rolled out or anticipated to roll out in the near future:
In the section below, we present these AI disruptors and some of the policy questions they may raise. The accompanying scenario maps show simple examples of enablers (i.e., policy areas that are preconditions for the disruptor to be successfully integrated at scale), potential concerns (i.e., areas of potential harm to individuals and communities), and impacted areas (i.e., sectors other than the disruptor’s main sector that would likely be affected by the disruptor). Developed individually, such maps can help policymakers better understand the potentially quickly shifting landscape of benefits and costs as AI is adopted by different sectors and respond with needed policy guardrails. When combined, these maps can support policymakers' assessments of how interactions among scenarios may amplify risks or create new approaches for coordinated policy.
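A per-disruptor scenario map of the kind described above (enablers, concerns, impacted areas) is naturally a small structured record, and combining maps amounts to looking for overlaps. The field names, example entries, and the overlap rule below are our own illustration, not the COMPASS protocol's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioMap:
    """Minimal sketch of a per-disruptor scenario map; fields are illustrative."""
    disruptor: str
    main_sector: str
    enablers: list[str] = field(default_factory=list)        # preconditions for adoption at scale
    concerns: list[str] = field(default_factory=list)        # potential harms to people/communities
    impacted_areas: list[str] = field(default_factory=list)  # other sectors likely affected

def shared_impacts(maps: list[ScenarioMap]) -> set[str]:
    """Sectors touched by more than one disruptor: candidates for coordinated policy."""
    seen: set[str] = set()
    shared: set[str] = set()
    for m in maps:
        for sector in m.impacted_areas:
            (shared if sector in seen else seen).add(sector)
    return shared

# Invented example maps for two of the disruptors discussed in this piece.
trading = ScenarioMap("AI trading agents", "financial services",
                      impacted_areas=["energy", "telecom"])
microgrid = ScenarioMap("autonomous microgrids", "energy",
                        impacted_areas=["energy", "real estate"])
print(shared_impacts([trading, microgrid]))  # energy shows up in both maps
```

Even this crude overlap check illustrates the combined-map idea: a sector that recurs across several disruptors' maps is a likely site of amplified risk and a candidate for cross-sector policy coordination.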
AI-driven trading agents could lower the barrier to entry for individual investing, potentially expanding the avenues available for people to accumulate wealth. Optimizing the investment infrastructure for machine-to-machine interaction could lower transaction costs and contribute to market stability. In this case, policymakers might need to consider the following questions:
Using advanced algorithms and vast biomedical datasets, AI can accelerate drug development, making new, more-effective drugs available more quickly and at lower costs compared with conventional development processes. As a result, some of the questions policymakers will need to consider include the following:
Microgrids that leverage AI to network distributed energy resources (DERs), such as rooftop solar, enable real-time management of supply and demand and have the potential to lower energy costs, increase the use of renewable energy sources, and improve the availability of energy. Furthermore, an autonomous microgrid could improve transmission and distribution resilience during grid disturbances. To take advantage of these benefits, policymakers need to consider a variety of issues, such as
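The real-time balancing idea behind such microgrids can be sketched very simply: given demand and available DER supply per interval, serve load from local renewables first, then from storage, then from grid imports. The greedy dispatch rule and all quantities below are illustrative assumptions, not an actual grid-management algorithm.

```python
def dispatch(demand_kw: list[float], solar_kw: list[float],
             battery_capacity_kwh: float) -> list[dict]:
    """Greedy per-interval dispatch: serve demand from rooftop solar first,
    then from a shared battery, then from grid imports. One-hour intervals;
    all quantities are illustrative."""
    plan: list[dict] = []
    charge = 0.0  # battery starts empty
    for need, solar in zip(demand_kw, solar_kw):
        from_solar = min(need, solar)
        # Bank any surplus solar in the battery, up to its capacity.
        charge = min(battery_capacity_kwh, charge + (solar - from_solar))
        remaining = need - from_solar
        from_battery = min(remaining, charge)
        charge -= from_battery
        plan.append({"solar": from_solar, "battery": from_battery,
                     "grid": remaining - from_battery})
    return plan

# A simulated evening: demand ramps up as solar output falls off.
plan = dispatch(demand_kw=[3.0, 4.0, 5.0], solar_kw=[6.0, 2.0, 0.0],
                battery_capacity_kwh=4.0)
print(plan)
```

Real autonomous grid management must also handle forecasting error, prices, line constraints, and safety, which is where the AI tools under development come in; the sketch only shows the bookkeeping that any such controller must get right in each interval.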
RAND researchers’ findings indicate that AI’s impact extends across interconnected sectors and holds the potential to both afford tremendous opportunities and pose risks of cascading failures. Effective policies will need to be strategic, coordinated, and flexible enough to adapt to rapidly evolving technologies and their impacts across systems. To grapple with these big-picture issues, policymakers might need to consider the following questions:
Many of the questions raised in this project are as much about U.S. economic, social, and political systems and the social safety net as they are about AI. Thus, for the Social and Economic Policy Rethink Initiative, what is next is a focus on exactly that: RAND researchers plan to turn their attention to the social safety net in the United States. Stay tuned.
Kekeli Sumah (Digital Designer), Haley Okuley (Digital Designer), Nelson Correia (Developer), and Shawna Templeton (Project Manager)
This publication is part of the RAND visualization series. RAND visualizations present graphical or interactive views of data and information from a published, peer-reviewed product or a body of published work.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.