Artificial General Intelligence and National Security: Q&A with Jim Mitre

Commentary

May 7, 2025

Jim Mitre, vice president and director of RAND Global and Emerging Risks

Photo by Evan Banks/RAND

A computer with human—or even superhuman—levels of intelligence remains, for now, a what-if. But AI labs around the world are racing to get there. U.S. leaders need to anticipate the day when that what-if becomes “What now?”

A recent RAND paper lays out five hard national security problems that will become very real the moment an artificial general intelligence comes online. Researchers did not try to guess whether that might happen in a few years, in a few decades, or never. They made only one prediction: If we ever get to that point, the consequences will be so profound that the U.S. government needs to take steps now to be ready for them.

RAND vice president and national security expert Jim Mitre wrote the paper with senior engineer Joel Predd.

Mitre was working on Wall Street on 9/11. He abandoned his career in finance and refocused on national security. He cofounded a private terrorism research organization, then moved to the Pentagon. He has since served in several leadership roles in the Department of Defense. His most recent job before coming to RAND was helping the Pentagon establish its Chief Digital and AI Office. He now directs RAND Global and Emerging Risks, a division focused on the most consequential challenges facing human civilization.

“What I worry about,” he said, “is that if artificial general intelligence ever does come about, the U.S. government is not well prepared to handle it. We don't want to stumble into that world. Our job at RAND is to help anticipate what some of the choices are going to be, some of the trade-offs—and to make sure we think through them in advance.”

What do you see as the most plausible scenario for how AI develops over the next five years?

To be honest, I don't know. What we hear from a lot of the technologists working at the forefront of AI is that we might be on the threshold of some significantly more capable model, which they refer to as artificial general intelligence. This is plausible. It may happen—and because it would be of such high consequence if it does, it's prudent to think through what that would mean.

There are people in the tech world who are worried about how capable these models are becoming and sounding the alarm for the U.S. government to grapple with the implications. But they're a little out of their depth once they start weighing in on what that means for national security. On the other hand, there are a lot of people in the national security community who aren't up to speed on where this technology might be going. We wanted to just level-set everybody, to say, "Look, from our perspective, AGI presents five hard problems for U.S. national security. Any sensible strategy needs to think through the implications and not over-optimize for any one."

What would be an example of that?

There have been calls for the U.S. government to launch a Manhattan Project–like effort to achieve artificial general intelligence. And if you're focused on ensuring the U.S. has the lead in this technology, that makes perfect sense. But that might spur the Chinese to race us there, which would aggravate global instability. Some people have also called for a moratorium on developing these technologies until we're certain we can control them. That takes care of one problem—a rogue AI getting out of the box. But then you risk enabling China or some other country to race ahead and maybe even weaponize this technology.

So how should leaders be thinking about this? How do they guard against these risks without stifling innovation?

At a minimum, they need to take steps to avoid technological surprise. They need a better sense of the current state of the technology, the capabilities it could present, and what those mean from a national security perspective. They also need to get this technology into the hands of operators. For example, frontier AI models right now are really good at computer programming. That raises a natural question: How might this technology affect cyber offense and defense? It would be sensible to have cyber operators working with state-of-the-art models, experimenting with them, learning their potential and limitations.

You've been studying these issues for years. Was there any one development or breakthrough that really made you sit up and take notice?

A lightbulb moment? I'm in awe of breakthroughs that happen almost on a monthly basis. But what has most seized my attention about this technology is how creative people are in finding new ways to use it. Unlike "narrow AI," which is built to solve a specific problem, generative AI is a general-purpose technology with a range of potential uses. It's a form of pure cognition, and one with its own agency. Understanding its possible applications, for good and ill, is endlessly fascinating and terrifying to consider.

As the director of RAND Global and Emerging Risks, what's one risk that more people should be paying attention to right now?

Unfortunately, when it comes to global and emerging risks, business is booming. There's no shortage of risks we need to grapple with. AI is a big one. Synthetic biology is a big one. China in its own right is a risk to global security. It's a cliché in the national security community to say that this moment in time is more dangerous than others. But it certainly does feel that way right now.