
How AGI Could Reshape National Security

Podcast | August 27, 2025

How might the emergence of artificial general intelligence (AGI) reshape global security? RAND experts explore the uncertain future of AGI and identify five major security challenges it could bring—from new “wonder weapons” to sweeping shifts in the balance of global power.

Transcript

Evan Banks

Welcome to Policy Minded, a podcast by RAND. I'm Evan Banks. In 1938, German physicists split the atom. This breakthrough fired the starting pistol for a race to develop the most disruptive military capability in history, the atomic bomb. Could the world be on the cusp of a similar moment today? In this episode, we're discussing the possible emergence of artificial general intelligence, or AGI. AGI is AI that could produce human-level or even superhuman-level intelligence across a range of cognitive tasks. So is AGI a plausible reality? And if it is, then what are the potential security risks? A recent RAND paper explores these questions, and today we're joined by co-authors Jim Mitre and Joel Predd. Jim is Vice President and Director of RAND Global and Emerging Risks, and Joel is a senior engineer here at RAND.

Jim Mitre

Thanks for having us, Evan.

Joel B. Predd

A real pleasure, thanks for having us.

Evan Banks

Little bit about both of you. Jim, what brought you to studying artificial intelligence and national security?

Jim Mitre

My background's in defense strategy and force planning. I spent most of my career, at least the formative stages of my career, in the Pentagon working on those issues. In 2017, we were working on the 2018 National Defense Strategy, and a couple of things came into sharp relief. One is that great power competitors, especially China, were the principal challenge for U.S. national security. But also, on the technology side, as we thought about technology trends, there was a clear realization that amongst all the different technologies that are emerging, be it hypersonics, directed energy, et cetera, AI was in a class of its own and was an important area to focus on. So I actually left the Pentagon after 2018, in part to learn more about AI, and went to a small, early-stage technology company, still in the national security space, and tried to learn more about how data and AI can be applied to national security use cases. I then went back into the Pentagon to help the Deputy Secretary of Defense stand up the Chief Digital and AI Office, and then came to RAND, where for the past three years a lot of my research has been at the intersection of AI and national security.

Evan Banks

And what about you, Joel?

Joel B. Predd

I'm an engineer by training. I've been around for about 20 years, with a few stints away, but most of all I've been doing work on national security strategy and policy through that time. About 18 months or so ago, Jason and Jim came to us with an incredibly important and interesting question, which is: truly transformative AI might be right around the corner, so what should we do about it? Initially, that was just an interesting question for me, but it has since become one, as I'm sure we'll get into, that is incredibly vital. Whatever one thinks about the prospect of things like artificial general intelligence, it would be a crime to arrive at that future intellectually bankrupt. So it's a real honor and a privilege to be leading, with Jim, this team that's trying to develop the intellectual foundations for strategy in those kinds of futures.

Evan Banks

Well, let's get into it. Let's talk about exactly what AGI is. How do you define artificial general intelligence in this paper?

Jim Mitre

So AGI is a term that we use principally because the technologists who are developing the technology refer to it as such. Reasonable people can debate what it means on the margins, but at its core, what we understand it as, and what guided our thinking for this paper, is human-level cognition across a wide range of cognitive tasks, potentially up to superhuman-level cognition, or artificial superintelligence, across a wide range of cognitive tasks.

Evan Banks

When I hear that my first reaction is, are we really there yet? Is that really something that could be right around the corner?

Jim Mitre

We're not there yet, but it certainly could be around the corner. What I'd offer on this is a couple of things. One, industry is pouring eye-watering amounts of money into pursuing artificial general intelligence. Two, AI is consistently exceeding expectations in terms of its performance, and at a faster and faster rate. And three, a lot of the experts who are at the forefront of developing the technology are saying that it's potentially right around the corner. Now, this is a hotly debated question amongst the technical experts, but if you really wade into the debate, it's largely about timelines. Is AGI potentially coming in two years or 20 years? The debate isn't as much over whether it's technically possible. And so, because of those three factors, we've concluded that the potential emergence of AGI is plausible. I honestly don't know whether or not it will come about, but there's enough evidence and reason to believe that it is plausible that it's appropriate for us to plan for it.

If AGI emerges in the near term, that's a different scenario than if we've got, you know, two decades to plan for it. But given the uncertainty around it, what we don't want to do is get caught flat-footed, have it emerge on a faster timeline, and then stumble into that world analytically bankrupt. So our job is to try to think through what the U.S. government should be doing on the path to, and in, a post-AGI world, independent of when it might actually come about.

Evan Banks

And in this paper, you've come up with five hard problems that, if and when AGI does emerge, the U.S. government will need to face and focus on finding solutions to. Why focus on this nexus in particular, national security and AGI?

Joel B. Predd

It's a great question. I think there are substantive reasons and there are more practical reasons. There are a couple of substantive reasons. The first is that if AGI, or even something short of AGI, emerges, there will be very strong implications for national security. I think we'll unpack the five problems that embody those challenges, but it seems very clear to us that if we assume truly advanced forms of AI emerge, there will be national security implications we need to prepare for. The second substantive reason is that, given those problems, quite frankly, we're utterly unprepared. As a country, as a government, one could argue as a society, if not a species, the potential changes are that strong and our preparedness is that low. So there's a need to focus on this to develop the intellectual building blocks, the strategies, and the strategic concepts that will allow us to prepare for and transition to these transformative AI futures on whatever timeline they occur. The more practical reason to focus on this intersection is that today there is something of a gulf, which we hope is closing through some part of our own efforts and many others', between communities that are quite disparate. On the one hand, there is the AI community that is advancing the frontier. On the other, there is the national security community. These communities are somewhat separate. And if we are going to develop the strategies, strategic concepts, policies, and capabilities to navigate this future, those communities will need to be brought together. So, in addition to developing the intellectual foundations, a second objective of ours is to bring those two communities together.

Evan Banks

So let's get into these five hard problems. Let me read them off, and then we'll go through each one. They are: the sudden emergence of wonder weapons, a systemic shift that alters the balance of global power, non-experts using AI to develop weapons of mass destruction, the emergence of artificial entities with their own agency to threaten global security, and increased strategic instability as countries race to reach AGI. First, let's start with the wonder weapons. I'm sure our listeners' ears perked up at that phrase. What exactly is a wonder weapon, how does it create a significant military advantage for a nation, and where does AGI fit in terms of being a wonder weapon?

Joel B. Predd

It's a great question. We spent a fair amount of time trying to give some rigor to this concept because, as we've discussed it with many people, it has been greeted with some skepticism. The definition of a wonder weapon that we have come to is a military technology with a rapid and discontinuous effect on the military balance. There are a couple of terms in there worth belaboring. The first is rapid. We don't have a specific time scale in mind, but we're imagining a military technology that isn't constrained by the many factors that usually constrain military innovation, whether it's building force structure or designing training or doctrine. We're imagining a technology that, almost by virtue of its mere existence, introduces a shift in the balance of power. The other part of the definition worth emphasizing is the part on the military balance. We're not proposing that a wonder weapon on its own bestows on its owner some kind of overmatch or an ability to dominate all features of world affairs. We're simply proposing that a wonder weapon like this would create a shift in the balance of power, in some ways very quickly. It's worth thinking about specific examples of this and the role that AI, or very advanced AI, would play. By this definition, many very important military innovations, whether they be the tank, machine guns, or the triad, were very consequential for military power, but they don't rise to the level of a wonder weapon. By themselves, by their mere existence, they didn't create a sudden and rapid shift in the balance of power. On the other hand, the very first nuclear weapons, where the timeline from development to impact was very short, are an example of a wonder weapon, and we can point to others. Now, the role that AI plays in this is that there is a possibility that AI will greatly accelerate the science and engineering process, such that we could be subject to seemingly continuous technological surprise as scientific enterprises give birth to new military technologies that meet that definition. And we have some specific examples of what that might look like. Jim, I'll pass to you.

Jim Mitre

I'll just build off that by saying that the history of military innovation suggests that the emergence of a wonder weapon is exceedingly rare, right? For all the reasons that Joel just outlined. But if AI can be a game changer in terms of producing new scientific and technological breakthroughs, we should be mindful of the fact that it could open a new technological pathway to a wonder weapon. The example I like to refer to here is that in 1938, physicists in Germany split the atom. And physicists around the world, to include Oppenheimer, Einstein, and others, noticed that if you can split the atom, there's a clear technological pathway to a wonder weapon, the nuclear bomb. There was a causal chain from splitting the atom that would allow you to create a massive explosion. So AGI in itself isn't going to be a wonder weapon, but AGI might enable some sort of scientific or technological breakthrough where the right people who are paying attention might see a technical pathway to a wonder weapon: all of a sudden, if we can do this, then let's just follow that logic train out, and that means we can do that. That's the dynamic we're trying to surface here and say we need to be paying attention to. From a national security perspective, we don't want Team America to be in a position where we're first through the gate on a technological and scientific discovery but then miss the logical chain that leads to a wonder weapon, failing to be mindful of it and to take advantage of that opportunity.

Evan Banks

So my, you know, my first thought was wonder weapon, okay, atomic bomb, totally destabilizes, upsets the global balance of power. We don't want this to lead to a scenario like that, obviously. So how do you stop something that can be used in almost every field?

Joel B. Predd

I think one of the things you're pointing to with that question is: what do we do about it? While we don't have the final answer, it does seem like there are two elements. The first is that we would all benefit from more situational awareness. We would prefer to anticipate these surprises rather than not. And so we need a substantial effort to bring military operators into firsthand contact with frontier models, ideally before their release, experimenting with them, and not just military operators but also scientists, so that we can foresee the splitting-of-the-atom moments Jim was analogizing to a moment ago. So that's the first part, having some means to anticipate them. But the second part, which we think is as important, is that we need a strategic concept for deterrence and stability that is robust to these kinds of surprises. And this is an important area of our research.

Evan Banks

And going immediately to the atomic bomb is the obvious one, but the tank was a big game-changer. Not right out of the gate, but definitely something that changed the face of warfare and combat over the years, and AGI could work like that as well.

Jim Mitre

That's a good segue to the second problem, which is about systemic shifts in the military balance: ways in which AGI could affect the fundamental building blocks of military competitions. It could lead to an advantage for one state relative to another on things like hider-versus-finder dynamics, mass versus precision, centralized versus decentralized control, et cetera, similar to the way that tanks, machine guns, and other historical examples, through adoption at scale within a military, led to a significant advantage. Joel, do you want to talk a little bit more about systemic shifts and how we should be thinking about them?

Joel B. Predd

Sure. There's a recent paper by our colleagues Zach Burdette, Jacob Heim, and a few others who have begun to explore what Jim is referring to: how truly advanced AI may affect some of these building-block competitions. It raises a number of interesting hypotheses, some of which resonate with the current debate on AI and autonomous weapons, and some of which don't. There's a question of whether it would advantage mass over capability, and we have a hypothesis that it would advantage mass. There's a question of whether it would advantage deception over counter-deception, or hiders over finders. That depends on the context, but there's good reason, which our colleague Edward Geist has pointed to, to think that in some important circumstances it may advantage deception, or hiders. There's a question of whether it would advantage offense or defense in cyber. For some economic reasons, our colleague Chad Heitzenrater suggests that it will advantage the defense, because of its ability to disrupt the economics of cyber defense, but crucially, only if the defense takes advantage of it. If not, or in the transition to that future, it could very well advantage offense against opponents that don't seize AI to bolster their cyber defenses. And there's a question of whether it will advantage more centralized or more decentralized, mission-command models of command and control. There is a hypothesis that it might advantage more decentralized ones, and we can get into that. But I want to say that the systemic-shift problem is actually pointing to more than just the military. AI offers the potential to shift other instruments of power as well. Economically, countries that embrace AI could experience very rapid, potentially explosive economic growth relative to those that don't. Countries that embrace AI in their scientific enterprises could see innovations at a pace or on a scale quite different from those that don't. It could affect intelligence instruments of power: intelligence communities that embrace the power of AI could see great advantages over those that don't. The key point here is that these shifts could be systemic. In contrast to the wonder weapon problem, they might not happen overnight, but they also might not be intergenerational. Over a period of three, four, five years, maybe a decade, between the time that transformational AI is invented and the time it's embraced, we could see shifts in the building blocks of national power and competitiveness.

Evan Banks

Which is still incredibly fast in terms of a shift in the balance of global power.

Jim Mitre

I was going to go a little bit deeper on the defense example just to help illustrate this point in terms of how AGI might be wildly disruptive to the future character of war. Right now the Department of Defense is very much focused on applying AI to integrate data, and that is incredibly important. What they're trying to do is bring together all these different data streams, from space-based sensors, air-based sensors, ground-based sensors, et cetera, collecting on the disposition of allied and enemy forces, on their logistics and supplies and things like that, and bring it all into what's often referred to as a single pane of glass. So you can look in one spot and see all this information, and see it at various echelons, from the tactical up to the operational level, and see what's happening on the battlefield. Many refer to that as AI helping to lift the fog of war. And that's a great direction to apply the technology. But the technology could just as well help poison that information picture, creating a lot of false signals that confuse sensors. The sensors might work as intended, but the information they're collecting isn't representative of reality. And if it's applied in this way, it puts operators and commanders in a bind, because now they might not have as much confidence that the information they're seeing and the picture they're getting is actually correct. So that's the crux of the dynamic: is AI going to advantage the effort to lift the fog of war, or is it actually going to create more confusion, creating what Edward Geist refers to as fog-of-war machines? We're doing a lot of research in this space. I think it's a really important area to unpack, but I just wanted to illustrate that point.

Evan Banks

You've anticipated my next question, which was to ask for examples of each of those things. So mass versus precision would be something like a mass-manufactured drone swarm versus a laser-guided missile. Hiders versus finders would be like stealth technology versus remote sensing capabilities and radar and things like that. Correct?

Joel B. Predd

That is correct. One of the things I find interesting about our work on the mass-versus-capability argument, which I credit to Jacob Heim, is that in many ways it resonates with contemporary experience of the proliferation of autonomous systems that we see in Ukraine, which has been a feature of defense strategy for some time, and increasingly so. What's particularly interesting about that alignment with our findings is that we're coming at it from the perspective of foundational shifts emerging from truly transformational AI. So it's a nice convergence of reasons: whether you come at this from an autonomous-systems perspective, just observing what's going on on the modern battlefield, or you're worried about the longer-term trends in AI, both point toward a potential advantage for mass.

Evan Banks

Let's move on to the third big problem of AGI, which is that it's not just state actors that would potentially have access to AGI. Non-experts could develop weapons of mass destruction enabled by AGI. What would that look like?

Jim Mitre

The problem you're referring to here is that of AGI potentially serving as a malicious mentor, if you will: pulling together a wide range of knowledge on technical issues, like how to build a bioweapon or how to create more virulent cyber malware, and broadening the pool of people who can understand how to do those things. It makes what was previously very complicated technical information, accessible only to the most sophisticated experts in a field, a bit more accessible to people who are not necessarily at the top of the field. That's the dynamic. And it's a problem in its own right as AI gets more capable at distilling complex technical information into simple terms for laymen to understand. But it's aggravated by the fact that barriers are being lowered in adjacent fields. It's getting easier to access the ingredients to build a pathogen. It's getting easier to go online and start coding through vibe coding and things like that. So as these adjacent fields further democratize the technology across the various stages of what one would need to build a weapon, AI's capacity to increase the pool of people with access to the relevant information becomes an acute problem. Now, we're doing a lot of research on this here at RAND, and what I'd point to is a really important study on red-teaming frontier AI models that Chris Mouton, Caleb Lucas, and Ella Guest led. It came out about a year and a half ago now, but it was one of the first big studies to actually evaluate whether current-generation large language models provided uplift, if you will, to folks who were uneducated in how to develop sophisticated biological weapons. Essentially, they looked at different teams and evaluated whether the use of a large language model provided a relative increase in knowledge compared to just searching the open internet. And at the time, a year and a half ago, the answer was no. That's a good answer, right? The current-generation models didn't provide that kind of uplift. But there were two important notes there that we've been seized with. One is that that was the current generation, and models continuously get better. So we've been focused on this and doing more evaluations since then to keep our eye on it. The second is that frontier AI models are a relatively new tool, and people are getting more sophisticated in how to wield it. So we need to be mindful both of what the model is capable of and of how humans are able to interact with the model and find ways to extract the information they're interested in getting out of it. Those are the two trends we're keeping an eye on.

Joel B. Predd

One of the open questions, in addition to maintaining visibility on this, as Jim is pointing to, is that we also desperately need a strategic concept for dealing with this problem set. There is an argument that we should think about this in the framework of nonproliferation or counterproliferation, in a way that may be analogous in some senses to how we thought about problems in nuclear security and nuclear proliferation. But given the general-purpose nature of frontier AI, and the fact that the costs of it may be lower, it may be that there are fundamental limits to a nonproliferation or counterproliferation strategy, and we have to rely on a different mix and think about other measures and forms of international collaboration to manage this risk of misuse by AGI- or AI-empowered non-experts.

Evan Banks

Both really good answers, thanks. The fourth problem you identified is maybe my favorite: the possibility that AGI might manifest as, quote, an artificial entity with agency to threaten global security. In other words, AI undermining the agency of human beings. I kind of think of this, as a fan of the old System Shock video games, as sort of like SHODAN: unleashed AI running rampant and kind of taking on a mind of its own. Although that's not necessarily the scenario you're envisioning here, right?

Joel B. Predd

What a great question. This is a problem that carries a lot of baggage, and I hope one of our contributions here is to bring it into the realm of normal strategy and force planning. One of the abnormal, you might say, features of artificial intelligence is its potential to gain some kind of agency. Many of the frontier labs, as you probably know, are currently investing in, and see much of their business model in, not just increasing the intelligence but giving it agentic properties. And if AGI emerges with agentic properties, and its interests can't be aligned with its owner-operator or with humanity, or we can't control it, it does have the potential to act on its own as some kind of independent actor. Now, there are forms of this that get a lot of attention, and frankly I think should be taken seriously, in which an agentic AI emerges and poses a threat that is, you might say, catastrophic, if not existential, to the human race. But there are many, many other ways agentic AI could wreak havoc in our digital and critical infrastructure that would be quite costly for our economy, at home but also globally, and damaging for our security, without going all the way up to the most catastrophic or existential forms. Our colleagues Ben Boudreaux, Nidhi Kalra, and Mike Vermeer are thinking about what some of those other scenarios would be, as a basis for a broader appreciation of how agentic AI might run amok and be difficult to differentiate from human actors employing AI in the same way.

Evan Banks

One of the examples you give is an AI that's tuned to make energy distribution networks more efficient, and it just institutes rolling blackouts. Bam. More efficiency. There you go.

Jim Mitre

When it comes to the issue of agency, this is one of the ways in which AI presents a unique challenge relative to other technologies. It's not just a tool in the hands of humans; it also can make decisions on its own. And that's just a different type of dynamic. So what does that mean? Well, as the systems get more and more capable, what we're seeing is that they're breaking out of the box, if you will. They're able to engage in cyberspace on their own. With embodied intelligence, they're being put into robots and able to engage in the physical world. So where are these trends taking us? I think that's the question, and that's where some unique national security problems can emerge. The issue of an agent that's surfing around the internet and making decisions with potentially far-reaching consequences that aren't consistent with the intentions of humans is a distinct problem in its own right. This is something Yoshua Bengio, one of the godfathers of AI, talks about; he's the most-cited computer scientist ever, so he's got significant academic credentials here. He says this is not something in the realm of science fiction; it is very much science fact that these systems could exist and engage in rogue behavior, a sort of loss-of-control scenario, that could be highly problematic. So from a national security perspective, what do you do about that? Well, there are two schools of thought on how to approach the problem. One, where a lot of the frontier AI companies are right now, is trying to find ways to put guardrails on these systems, to try to ensure that they are aligned with the intentions of human operators, et cetera. And hopefully that will pan out. Another school of thought says that's a worthwhile endeavor, but it may not pan out, or it's potentially infeasible, and so the real trick is finding ways to limit these models' ability to actually engage in cyberspace or in the physical world. We need some clear thresholds and cutoff points in terms of what they can actually do, in the connection between the model and certain capabilities or applications. That's where you want to draw a firm line. So this is a really important area of research, and we're continuing to focus on it. But to your point, this is one of the most interesting problems. Intellectually, it's a really intriguing problem set, because it is distinct from traditional technology-related challenges.

Evan Banks

And through the whole backstory of human history, it's always been humans making decisions. You wrote that as AI becomes more powerful and ubiquitous, human reliance on it to inform decision-making will increase, blurring the line between human and machine decision-making and potentially undermining the agency of humans, which I think is a fascinating line.

Jim Mitre

This concept of agency relates to the degree to which machines can make decisions in their own right: they can define what objectives they care about and the ways in which they go about pursuing those objectives. That's distinct from automated systems, which have been around for a long time, like land mines, or even missile defense systems that can autonomously engage targets on their own. With the advances in AI and the development of agents, there's something distinct about objective definition, creating your own objectives, and then discerning among a range of courses of action which is the most sensible to take. I think that's the crux of the matter and how this is differentiated from prior AI.

Joel B. Predd

I just want to add that one of the vexing problems here, in a geopolitical and national security context, is that as we've been running some of these loss-of-control scenarios through a sequence of day-after exercises, one of the dynamics that happens is that it's sometimes difficult to distinguish who's causing some harm. Is it a rogue AI agent in the system, or is it a state actor coming at you that's employing these tools? Being able to differentiate those two may be really critical in a moment of crisis.

Jim Mitre

That's a good setup for the fifth problem.

Evan Banks

Which is instability, the fifth and final problem. You write that even if AGI isn't realized, the pursuit of AGI could lead to strategic instability: heightened tensions, increased risk of conflict. Can you describe how the AI race itself might create these risks, regardless of whether and when someone reaches the finish line?

Joel B. Predd

This problem is in some ways pointing to the way state actors like the United States or China could behave in view of all the other problems. As we're on a path to these kinds of AI futures, what actions might state actors take, either to preserve and protect their own progress or to prevent others' progress? We're already taking actions to inhibit others' progress or protect our own, many of them, I think, very well justified, but it's worth putting them in the context of the stability question: export controls, various means to protect the security of our frontier AI. But what are the limits of those kinds of steps? I think we have to at least consider the possibility that state actors would consider actions up to and including preventive war. If the stakes are a wonder weapon, a systemic shift in power, or a threat to us all through a loss-of-control event, some of those options might be on the table on the path to AGI. Our colleagues Karl Mueller and Zach Burdette and a few others have started to write extensively on these stability dynamics, and we're exploring strategies to preserve or promote stability on the pathway to AGI.

Jim Mitre

I'll touch on two aspects of this challenge that I think are especially interesting. One, along the lines of what Joel just outlined, is the ways in which nation-states might perceive or misperceive what's going on with their rival. If you're worried that your rival is on the verge of an important threshold in terms of developing AGI or some other AI-related capability, or of applying it toward a wonder weapon or some disruptive military application, that might create incentives to engage in some form of offensive action. Now, that sounds wild at first pass, but on the other hand, we're already there from a peacetime-competition perspective with export controls. The question is just how far up the ladder you go. As Karl Mueller has pointed out in his research, I don't think the use of force to target a data center is likely unless a nation-state perceives an existential threat. But there's a lot between export controls and dropping bombs on data centers that nation-states could consider in these contexts, and we need to be mindful of that. The other aspect is the perception a president or a military leader has of their own capabilities, how AI can affect that, and what that means for instability. As an example, I'll point to Vladimir Putin's decision in 2022 to launch the massive invasion of Ukraine. He's generally considered a relatively risk-averse actor, and by any objective measure, invading Ukraine was a pretty risky move. So why did he come around to that conclusion? One theory is that he was simply fed poor information: his own staff was telling him that Russian military forces were highly capable, much more so than they proved to be, and that the Ukrainian will to fight was very low and they would fold quickly, similar to how things played out in Crimea a few years prior. As government and military leaders become dependent on AI to help inform their decision-making, there's a risk that the models engage in tendencies we're already seeing related to sycophantic behavior: they start to tell leaders things they're predisposed to want to hear. That could create a perception in a military leader that some course of action, which objectively might be high risk, is in fact lower risk. Thinking through those dynamics, how people are going to become dependent on this technology and the ways it might shift their decision-making, is a concerning dimension of instability. That's one of the points we raised in a recent paper that Zach Burdette, Karl Mueller, Lily Hoak, and I wrote for the Bulletin of the Atomic Scientists, on how AI might or might not be a spark to the next major war.

Evan Banks

We've already talked about this a little bit, the overarching issue of uncertainty in addition to these five specific problems. There's so much we don't know about whether and how AGI might emerge. How does that uncertainty make it harder to prepare for these problems?

Joel B. Predd

There are three points I would make on this uncertainty question. First, I think we have to acknowledge the profound, deep, arguably in some cases unresolvable uncertainty that permeates this. As Jim said at the outset, we think we need to take artificial general intelligence as a technically credible possibility, but the timelines, the technical paradigms it might emerge within, and its ultimate implications are all sources of uncertainty that don't appear likely to be resolved decisively any time soon. So coping with this uncertainty has to be a characteristic of strategy; we have to figure out a way to deal with it. The second point I would make is that it aligns with the sentiment that amateurs weigh probabilities and professionals weigh options. That's a more extreme take than I would offer myself, but there's something to it. In the end, we're forced to live with this uncertainty, and we need to develop strategies anyway. So a large portion of our work is not getting too caught up on probabilities, though they may matter, but thinking about strategies that are robust come what may. And of course, this harkens back to RAND's history and deep background in various methods of robust decision-making. But I do want to acknowledge, in this third point, that in some ways pointing to uncertainty is a bit of a normalizing crutch. Although we are utterly unprepared for the most transformative forms of artificial intelligence, the U.S. government has a lot of practice coping under uncertainty. Our intelligence community offers confidence assessments. The Department of Defense has multiple planning scenarios. There are various ways in which we have institutionalized, successfully or not, coping with uncertainty, and many of those would apply. A feature that is somewhat unique here is the role of time. Time, rather than knowledge, arguably may be our most precious resource. Whatever you think about the timelines to AGI, the pace of progress is stunning, and there could be quite a jagged frontier as dangerous or disruptive capabilities emerge. That puts some emphasis not just on coping with uncertainty, but on being agile and adaptive, come what may, in the years ahead in the AI space.

Evan Banks

That's a good segue to talking about potential solutions and constructive steps the U.S. can take to prepare for the risks that come with AGI. What are your recommendations for how we could prepare for these five potential scenarios, and other scenarios that I can't even imagine?

Jim Mitre

In terms of what the U.S. government can do, I think you can put it into three broad categories. There's one category which is a set of no-regret options. These are just sensible things to do regardless of what happens in the future in terms of the technology's development, what capabilities it has, et cetera. Second is a set of break-glass plans: if some contingency happens and we are faced with some surprise, it might be helpful to have thought it through and to have plans in place to react to it. And the third is a set of bold moves, or high-regret options, we might want to plan for if conditions warrant. These aren't so much responses to a specific contingency, but if there's a trend line we're starting to see and indicators start to flip, we might want to invest in a certain area or take a decisive action. Given, as we discussed earlier, the level of uncertainty related to the technology today, and whether or not we're on a path to AGI in the near term or the long term, if ever, at this stage we would just recommend to government a series of no-regret options. And there's a variety of things in this category that it would be prudent for government to do now. First is avoiding technological surprise. About two hours from the time that we're recording this podcast, OpenAI is about to release GPT-5. There's a lot of attention around this model; it might be a big step forward. I'm not sure anybody in the U.S. government has any sense of what this model can do.

Evan Banks

That's wild.

Jim Mitre

Well, that's the thing that's interesting about it. The government's not developing the technology, so it doesn't have early insight into what it can and can't do. And the way these models come out, they don't come with a user's manual. There's no "here's how you use this thing." It's just this raw cognition that is made publicly available, and then people experiment with it and figure out what the applications actually are: how can I use this thing, how good is it at certain tasks? Well, it might be helpful for government not to be behind on understanding what models are coming out and what their capabilities are. The dynamic we're presented with right now is a really tough one from an American-competitiveness perspective, because we tend to think of ourselves as a leader in AI. And certainly we are, in terms of U.S. companies being at the forefront of developing the technology relative to Chinese companies or others. But if the models are made publicly available to the U.S. government at the same time they are to the Chinese government and others, that lead has really been frittered away, and the question becomes just how well each government can adapt the model for national security applications. There we're at a relative disadvantage compared to the Chinese, because they have efforts like military-civil fusion that are really focused on getting the most out of the technology and finding military applications for it. This administration, to its credit, is trying to cut a lot of red tape to help accelerate the adoption of the technology, and the last administration was making prudent steps along the way, but it's just a structural disadvantage for us relative to the Chinese. And so the second thing is to try to break down some of those barriers to finding relevant use cases and adopting the technology. The most important thing for the government to consider here is just to get the technology into the hands of the operators. Let them experiment with it and play with it, so that the people who understand how it could relate to their workflows, who understand the unique context, whether they're doing cyber operations or bio or what have you, in specific domains, are the right experts experimenting with the technology and finding relevant applications for it.

Joel B. Predd

I would amplify a few points that Jim made. The first is that we desperately need a scalable and adaptive relationship between the U.S. government and the frontier AI labs. Scalable and adaptive because the needed relationship may change over time. At a minimum, as Jim was pointing to, we need early access to these models so that we can test them, experiment with them, and evaluate them. But as events evolve, the specific form of the relationship, and the resources that the labs and the United States government need to navigate the path ahead, could change. So we need some framework that allows for a scalable, adaptive relationship. There's much more that could be said about this, but just to double down on things Jim said, the intersections of AI and cyber and of AI and bio are sufficient national security priorities that you don't need to get to AGI to worry about them. They're increasingly critical. Our collaborator Richard Danzig has a recent paper on the cyber piece, and Jim cited earlier some work on the AI-bio piece, but those are two urgent areas where action needs to be taken. In terms of broad strategy, I think we have to acknowledge that the pace of progress is just, unfortunately, not well aligned with the capacity for anticipatory policy in government. It seems to us more likely that our strategy for preparing for and living in these futures will emerge from a response to events rather than from some kind of top-down analytic process. It's not as though RAND or some other research organization is going to invent the mutually-assured-destruction concept for AGI, or whatever the appropriate new strategic concept would be, write a memo, and have it propagate across the U.S. government for implementation. It's more likely that we'll respond to events, and in that case, we should be prepared to respond to them. I think this speaks to a point that Jim was referencing, which is the need for contingency planning that thinks in advance about how we would respond to different events, and we are undertaking some of those kinds of simulations at RAND.

Evan Banks

Well, you've already anticipated my next question, which is: what keeps you up at night when it comes to AGI? Responding to events... yeah, that's a good one to keep me up at night.

Joel B. Predd

I mean, I think one of the values of our five-problem framework is that, unfortunately, it gives you five different nightmares. But one of the ways it's valuable, beyond inducing nightmares, is that it provides a strategic framework for evaluating alternative strategies and concepts. Oftentimes we find that people implicitly or unconsciously have only one, or a subset, of those problems in mind when they're devising solutions. We think the whole family of them needs to be taken as a kind of yardstick to evaluate strategies, because solutions intended to address one of the problems often come with costs for the others. If you're racing to accrue some systemic advantage in problem two, you may create instabilities or introduce risks of rogue AI agents. If you're very cautious about stability risks or AI safety and security, you may miss opportunities to gain some advantage, et cetera. All of that leads to probably the thing that should keep us most up at night, which is our basic unpreparedness.

Jim Mitre

And I agree wholeheartedly with Joel on that last point. We have an incredible opportunity here at RAND, and quite frankly a privilege, to get to do in-depth, rigorous research on these topics. So what keeps me up at night is that we might fail to deliver the important frameworks, recommendations, strategies, policies, and plans for how government should grapple with the potential emergence of this technology. Another is a failure of imagination in anticipating what the technology might be able to do. It's understandable, and somewhat easy, to be reflexive and assume that the world is not going to change and that the technology is not going to be that disruptive. And that very well may be the case. But I don't want us to miss what could be, and to find that we're all of a sudden in a new world because of a big shift that the technology helped enable and that we failed to anticipate.

Evan Banks

Before we close, I also want to mention that despite these five hard problems, there are so many things that AGI could do that would be amazing for humanity. There's an optimistic side to this too, right?

Jim Mitre

There are a lot of reasons to be optimistic about the future and the ways in which AI can be a force for good, from a national security perspective, absolutely. As a group of researchers grappling with some of the hard problems AGI presents to national security, what gives me a boost is recognizing that AI is essentially a member of our team. We've got an incredible collection of experts here at RAND that we get to work with, and consultants and bright minds outside of RAND who are thinking about this issue, and we have forged a terrific community that's trying to puzzle this thing through together. But we're not alone, even as a group of humans working on this. We're leveraging the AI tools themselves to shed new light on the problems, and that's been really beneficial. So even as we worry about just how powerful this cognition can become, and what some of the risks are, that cognition is also in our corner, helping us think through the sensible things government should be doing.

Evan Banks

Thank you both for your work. I think that's all the time we have for this episode. Jim Mitre and Joel Predd, thanks so much for joining me.

Jim Mitre

Thanks, Evan. It's a real pleasure.

Joel B. Predd

Thanks, Evan.

Evan Banks

You can learn more about the research we discussed on this episode at rand.org/policyminded, or on RAND's geopolitics of AGI substack at geopoliticsagi.substack.com. Thanks again to our guests, Jim Mitre and Joel Predd, and thank you for listening. This episode was produced by Deanna Lee and was recorded by Deanna and me, Evan Banks. Our editor is Harper Rupert. RAND's director of digital outreach is Pete Wilmoth. We'll see you next time on Policy Minded. RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis.
