Managing Industry Influence in U.S. AI Policy

Kevin Wei, Carson Ezell, Nicholas Gabrieli, Chinmay Deshpande

Research Summary
Published Dec 13, 2024

Key Findings

  • AI companies' policy influence: As of 2024, AI industry actors are attempting to influence U.S. AI policy through many direct and indirect channels, primarily through agenda-setting, advocacy activities, influence in academia and research, and information management. Industry influence could cause regulatory capture when it results in policy outcomes that are detrimental to the public interest.
  • Recommendations for policymakers: To manage industry influence in U.S. AI policy and prevent regulatory capture, policymakers should
    1. invest in building robust civil society institutions, such as through independent funding streams;
    2. consider procedural and institutional safeguards, including robust ethics requirements;
    3. build technical capacity in government and civil society, such as by competitively hiring technical talent;
    4. increase transparency regarding the AI industry’s influence activities; and
    5. raise regulatory visibility and verify industry-originated information through reporting and monitoring requirements.

Actors in the U.S. artificial intelligence (AI) industry have gained extensive influence in conversations about the regulation of general-purpose AI systems. Although industry involvement often contributes to effective policymaking, undue industry influence can lead to regulatory capture in which industry priorities override the public interest. The AI industry's influence on policymaking qualifies as regulatory capture when it meets two criteria:

  1. Policy outcomes occur that favor industry actors and contravene the public interest.
  2. These outcomes result from industry influence.

Even if AI policy is not "captured" as of 2024, capture could in the future impede effective AI regulation and instead advance policies or enforcement practices that are ineffective, unsafe, unjust, or entirely absent.

RAND researchers examined the potential downsides of industry influence, surveyed the direct and indirect channels of influence that the industry may use, and offered suggestions for how policymakers can manage industry influence in AI policy.

What Could the AI Industry Influence?

Industry influence on policy outcomes could undermine the public interest by affecting the following:

  1. Policy content: Industry actors could influence the content, strength, or existence of regulations. In practice, influence can lead to weak regulations that fail to protect the public, the absence of regulations altogether, or regulations that protect AI companies’ market advantages. Interviews conducted with 17 AI policy experts from government, civil society, and academia revealed broad concern that regulatory capture might result in AI regulation that is too weak or in no regulation at all. Only two experts expressed concern that companies might push for regulations that could stifle competition; these experts pointed to the high barriers to entry that already exist in the industry, such as compute costs and the early stage of AI product commercialization.
  2. Policy enforcement: Industry actors could push for policies that are weakly enforced, riddled with exceptions, or biased toward certain companies. Given the current absence of AI regulations in the United States, experts expressed less concern about enforcement issues. However, some experts raised concern about the historical lack of regulatory enforcement on the tech industry overall.
  3. Governance structures: Industry actors could influence the structures of institutions that are responsible for setting or enforcing policies. For instance, their influence could result in underfunded institutions, nonuniform rules that allow for regulatory arbitrage, or federal policies that preempt more-effective state policies. No experts indicated that AI companies are presently seeking to create these outcomes.[1]

How Could the AI Industry Influence Policy?

AI industry actors could exert influence on the policy process directly or indirectly. Industry actors use direct methods when they engage policymakers, including legislatures, regulatory agencies, courts, White House offices, and individual staffers or officials. Direct methods fall into the following four categories:

  1. Personal engagement: Industry actors directly participate in the formal policymaking process, such as through advocacy or procedural obstruction.
  2. Incentive shaping: Industry actors use donations, gifts, bribes, private threats, or the revolving door as positive or negative incentives to shape policymakers' decisions.
  3. Information capture: Industry actors shape policymakers' information environment; techniques include agenda-setting, information management, and information overload.
  4. Cultural capture: This can include industry actors’ efforts to form new social dynamics or exploit existing ones with policymakers, such as through group identity, relationship networks, or status.

Industry actors may also seek to influence policymaking through indirect methods. In this strategy, industry actors seek to influence intermediaries, including academia, media, private regulators (such as auditors and standards-setting bodies), and the public, which in turn affect policymakers.

Of the 17 experts interviewed for this research, 15 cited agenda-setting as a key way the AI industry attempts to influence policy by advancing anti-regulation narratives and steering policy conversations toward or away from particular AI-related problems. Thirteen experts mentioned advocacy and indicated that AI companies and trade associations are conducting advocacy activities that target legislators. Figure 1 illustrates the extent to which each method was mentioned by the interviewed experts.

Figure 1. Number of Interviewed Experts Who Discussed the Indicated Methods of Industry Influence on AI Policy

Direct influence

Personal engagement

  • Advocacy: 13
  • Procedural obstruction: 1

Incentive shaping

  • Revolving door: 6
  • Donations, gifts, and bribes: 5
  • Private threats: 0

Information capture

  • Agenda-setting: 15
  • Information management: 9
  • Information overload: 1

Cultural capture

  • Status: 7
  • Relationship networks: 4
  • Group identity: 3

Indirect influence

Indirect capture

  • Academic capture: 10
  • Media capture: 7
  • Public relations: 6
  • Private regulator capture: 3

Recommendations to Manage Industry Influence in U.S. AI Policy

The researchers recommend that policymakers focus on the evolving ways the AI industry is influencing policy rather than on the specific policy positions for which industry actors advocate. Specific suggestions include the following:

  • Invest in building robust civil society institutions by ensuring that government, academia, private regulators, and other organizations receive sufficient funding and talent to resist regulatory capture.
  • Adopt procedural and institutional safeguards, such as by empowering watchdogs, conducting independent reviews of regulations, and establishing advisory boards or public advocates.
  • Build technical capacity in government and civil society by promoting access requirements, providing independent funding, and creating public AI infrastructure.
  • Increase transparency of industry influence through the mandatory disclosures of gifts, lobbying meetings, industry events, or other activities.
  • Institute reporting and monitoring requirements to raise regulatory visibility, and verify information originating from the AI industry.

Figure 2 outlines 11 specific ways policymakers can mitigate the AI industry’s influence and identifies the targeted influence methods for each option.

Figure 2. Options for Mitigating Mechanisms of AI Industry Influence, per Influence Method

Direct Influence [a]

Targets multiple method categories

Build robust civil society institutions

  • Personal Engagement [b]: Advocacy
  • Incentive Shaping: Donations, gifts, bribes

Increase transparency requirements for corporate influence

  • Personal Engagement [b]: Advocacy
  • Incentive Shaping: Donations, gifts, bribes

Targets incentive shaping

Fund and provide AI-specific training for government ethics offices

  • Incentive Shaping [c]: Revolving door

Make government careers more desirable (via salaries, professional development, etc.)

  • Incentive Shaping [c]: Revolving door

Strengthen and enforce government ethics policies, such as conflict-of-interest reviews

  • Incentive Shaping [c]: Revolving door

Targets information capture

Consider consumer empowerment programs to enable civic participation

  • Information Capture: Agenda-setting
  • Information Capture: Information management
  • Information Capture: Information overload

Increase nonindustry stakeholder access to policy processes (especially early stages of policy development)

  • Information Capture: Agenda-setting
  • Information Capture: Information management
  • Information Capture: Information overload

Institute reporting and monitoring requirements to raise regulatory visibility and verify industry information

  • Information Capture: Agenda-setting
  • Information Capture: Information management
  • Information Capture: Information overload

Indirect Influence

Targets multiple method categories

Build robust civil society institutions

  • Indirect Capture: Public relations
  • Indirect Capture: Media capture

Increase transparency requirements for corporate influence

  • Indirect Capture: Public relations
  • Indirect Capture: Media capture

Targets indirect capture

Provide funding sources independent of industry

  • Indirect Capture: Academic capture
  • Indirect Capture: Private regulator capture

Ensure academic access to compute and data resources

  • Indirect Capture: Academic capture

Increase nonindustry career opportunities

  • Indirect Capture: Academic capture

[a] Not pictured: cultural capture category (group identity, relationship networks, status); more research is needed.

[b] Not pictured: procedural obstruction; more research is needed.

[c] Not pictured: private threats (does not apply to the United States).

Citation


Wei, Kevin, Carson Ezell, Nicholas Gabrieli, and Chinmay Deshpande, Managing Industry Influence in U.S. AI Policy. Santa Monica, CA: RAND Corporation, 2024. https://www.rand.org/pubs/research_briefs/RBA3679-1.html.

This publication is part of the RAND research brief series. Research briefs present policy-oriented summaries of individual published, peer-reviewed documents or of a body of published work.
