Machine Learning and Gene Editing at the Helm of a Societal Evolution
Podcast | October 27, 2023
Two revolutionary technologies, machine learning and gene editing, are converging. This could transform how diseases are treated, how crops are grown, how wars are fought, and much more. RAND Europe research leader Sana Zakaria and RAND senior research engineer Timothy Marler join us to discuss this possible future—and the policy approaches needed to prepare for it.
Deanna Lee
You're listening to Policy Currents, a weekly podcast from the RAND Corporation. I'm Deanna Lee. Imagine this: it's the year 2055 and medical breakthroughs are making headlines left and right. A woman in New York City has just conceived a baby without the need for a sperm donor. Another unborn child's fatal disease has been cured thanks to a simple drug the mother takes during pregnancy. And there is buzz around new treatments for cancer that have been highly effective in clinical trials and could soon be available to the public. These breakthroughs have the potential to change the lives of millions of people for the better. But the treatments are new and expensive, meaning most people likely won't have access to them. Elsewhere in the world, a war is raging, but it's a different kind of war than we've seen before. One side is using cognitively enhanced super soldiers who can be active in the field for 72 hours straight without becoming exhausted. Meanwhile, the other side of the fight is threatening to use a biological weapon that can target specific genetic traits. All these developments, good and bad, were made possible by the convergence of genetic editing and artificial intelligence. Now, to be clear, these are hypothetical scenarios. After all, we're talking about 2055, not 2023. But they demonstrate the broad range of possibilities and risks that might arise when two revolutionary technologies like gene editing and AI come together. They also show what can happen when regulation fails to keep up with those advancements. Today we'll be exploring this fascinating topic as it's the focus of a newly published RAND report. I think the title alone is enough to pull you in: AI and Gene Editing at the Helm of a Societal Evolution. And I'm joined today by the authors of that report, Sana Zakaria, a research leader at RAND Europe who studies emerging technologies, and Tim Marler, a senior research engineer at RAND. Sana is based in the U.K. and Tim is here in the studio. Sana, Tim, welcome.
Tim Marler
Thank you.
Sana Zakaria
Thank you.
Deanna Lee
As I just mentioned, the title of your report certainly makes an impression. But before we get to some of the profound implications of machine learning and gene editing, which we're going to talk a lot about, let's take a step back and just talk about the basics of these technologies and what they are. Can you give us a quick primer on what gene editing and machine learning are? Sana, do you want to start with gene editing?
Sana Zakaria
Yeah, sure, happy to. So gene editing is a really broad field of study; it is primarily focused on manipulating the genetic material of living organisms. Now, there are various tools and techniques out there that can be used to do so, and they've been around for many years. A good thing to note is that a lot of the tools are becoming cheaper, quicker, and more accurate. Another term that people might be hearing quite a lot is engineering biology, or synthetic biology. This is a subset of gene editing, where engineering principles are applied to manipulate genetic material, to create new organisms or to give new functions to existing organisms. The technology has been around for a while. It has been used in adults for therapies to manage, and in some cases cure, diseases like sickle cell anemia. There are trials going on to cure blindness. It has also been used to enhance the nutritional value of crops and make them more climate resilient, and to create bacteria that can digest industrial waste and produce clean fuel. So the benefits are really obvious: improved health, a greener, more resilient economy. The known risks really vary, from inequity due to lack of access to the technology, further widening the health and wealth gap, to the potential use of genetic information to discriminate in mortgage approvals or insurance premiums, to the creation of novel organisms that can harm humans: super bacteria, super viruses. The more troubling aspect, perhaps, is the risks that are not known to us.
Deanna Lee
Right. And I think we'll talk about some of those unknowns in a bit. Tim, what about machine learning? Is this another word for AI? A specific type of AI? Can you tell us a little bit more about it?
Tim Marler
Sure. So machine learning is a component, or an element, of AI. And even though it has been around for a long time, I think even today, if you asked ten people, you'd get ten different definitions. Broadly speaking, AI is simulated intelligence stemming from some mathematical equation or model. Some categorize it as a branch of computer science, or a capability to imitate intelligent human behavior. It has different aspects, as does intelligence in general. And again, different people might categorize the aspects of AI differently, but generally you've got some form of perception: gathering data, bringing in information. You then have some approach to doing something with that data; that's machine learning, analyzing and learning from that data. There's another aspect that's really managing that data: How do you store it? How do you categorize it? And then in some cases, you could say there's an aspect of intelligence that involves planning actions, and potentially there's an aspect of AI that involves human interaction. So machine learning really is a piece of AI. Now, technically, we've had machine learning, mathematical models for making decisions or interpreting data, ever since we could fit a line through a set of data points on a piece of paper. What has happened lately, though, maybe in the last 10 or 20 years, is a concurrent advancement in computing power, computers are much stronger and faster, and an increased amount of, and access to, data. And that's what's led to the breadth of applications and power today. I can get into some of those applications, if you'd like, just as a sample.
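To make Tim's line-fitting point concrete, here is a minimal sketch, purely illustrative and not from the episode, of machine learning in its simplest form: estimating a line from example data points and using it to predict an unseen case. The data below is invented.

```python
# A minimal illustration of "machine learning" in its simplest form:
# fitting a line through data points, then predicting from it.
import numpy as np

# Toy data: hours of study (x) vs. test score (y); numbers are invented
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([52.0, 60.0, 71.0, 79.0, 88.0])

# "Learning": estimate slope and intercept by least squares
slope, intercept = np.polyfit(x, y, deg=1)

# "Inference": predict the score for an unseen input (6 hours of study)
print(round(slope * 6.0 + intercept, 1))
```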
Deanna Lee
I think our listeners will agree. It does feel like AI has sort of arrived on the scene very quickly, even though, as you said, the capabilities have been there. So let's talk maybe about some of those applications people might be familiar with or have heard of and how machine learning is a part of those.
Tim Marler
Yeah. To be clear, neither machine learning nor AI has just arrived on the scene. Certainly the headlines have expanded, and in some cases there's a little bit of hype, undue hype. The applications have really accelerated, but the fundamental capabilities have been around for decades and have been in use for decades. Now, though, they're being disseminated faster and they're more visible. And I would say, to a larger extent, they're in the hands of non-technical users. For example, a Roomba, a vacuum that automatically vacuums my floor. I like to say AI is cleaning my house as we speak. That's one example. Facial recognition software: this speaks to kind of risks and opportunities. There was a story not too long ago about a city in China with really advanced public facial recognition software, and the host of a show entered his name into a database of criminals, people who have broken the law. He then took a train to the middle of the city and saw how long it took the police to find him. He got off that train, various cameras around the city recognized him and sent a note to the police, and within seven minutes they came to him, because they didn't know that this was just part of a show. So that's a definite application of AI. ChatGPT has definitely just hit the scene, relatively speaking, where you can go on a software system and say, write a paper on how to make coffee, and that will happen. That's based on natural language processing and, in general, AI capabilities. Image generation: there's software now where I could say, paint a picture of that cup of coffee, or show me a picture of a cup of coffee as if Da Vinci had painted it, and with some level of accuracy, it can do that. Virtual assistants like Alexa are based on AI, and autonomous vehicles, of course, use AI. So it really has broad applications. It's touching many business sectors, across communities. But these applications, of course, come with risks, and we can touch on some of those as well if it's appropriate.
Deanna Lee
Yeah, that'd be great. If you could just give us a quick overview of some of the risks. Your anecdote about the show and the police tracking someone down based on data from facial recognition software, I think, is a good demonstration of one. But what are some of the other sort of broad risks that come with machine learning, and AI more generally, exploding in the way it is?
Tim Marler
Sure. One thing to keep in mind is that AI capabilities, and machine learning specifically, depend critically on the underlying data. There's an expression that genius without education is like silver in the mine. It's the same kind of thing. You can have a phenomenal computer, some kind of great algorithm or model, but if you don't know anything, if you have no information, no data, it doesn't do you much good. AI is the same: it depends on that data. That facial recognition system was trained on a dataset of pictures of people. So this is a good example of the dual use. It's a good thing if they just caught a bad guy who was going to go rob somebody; it's a bad thing if they didn't have all the data they needed and you kind of looked like some bad guy out there, so they arrest you and you're late for a meeting when you're completely innocent. So there's a balance between these. Another example would be this idea of deepfakes, really accurate fake images of people or things. I think there was a case where an AI-generated image of an explosion near the Pentagon was linked to a brief dip in stock prices. It was not real, but it looked so real that people thought it had happened. You can also have what are called inherent biases in the data that is used to train something. So if you have a bunch of pictures of a certain profession, let's say farmers, and all those pictures that train the machine learning happen to have certain characteristics, perhaps look a certain way, a certain skin tone, then you're going to have an algorithm that says farmers always kind of look this way. But they don't; that's just the data it was trained on. So, inherent biases. There are arguments that as the use of machine learning expands, where it can be of great benefit in relieving humans of rote, repetitive tasks, it might also detract from our ability to be creative, or to practice being creative, because we'll just depend on it. If you have this brand-new thing called a car, well, you don't walk as much; you don't get as much exercise. So there are some serious risks here, but again, there's a balance. I think it also has significant implications for workforce development, for jobs. Certain jobs will become less necessary, perhaps even unnecessary, but there will be, I think, new jobs in different types of work. So there are a fair amount of risks. I think also, especially in the case of AI, there is, fear is perhaps too strong a word, an abundance of concern. For non-technical users, even policymakers, if you don't understand it, you fear it. A good example of this that I always quote: Queen Elizabeth I didn't want to grant a patent for essentially a knitting machine, a loom, in 1589, because it was going to replace jobs; it was going to hurt those tailors. A car, like I said, is extremely dangerous, but it's also very valuable. AI is the same way: it can be very dangerous and there are substantial risks. And it's incumbent upon us as researchers to inform not just associates, but the general public, non-technical users, policymakers, because if you really understand it, that can dispel that kind of fear, and then it can support more informed, presumably better, decisions.
Deanna Lee
Okay. So we've talked about each of these technologies and some of the benefits. Curing diseases is obviously great. Cleaning your house without lifting a finger is great. But the risks are serious and they should be taken seriously. However, these technologies are quite new and they're developing really rapidly, right? So are they being regulated at all, and if so, how?
Sana Zakaria
Coming at it from a gene editing perspective: gene editing has been around for a long time now. It's the tools that are changing; it's the manner in which you apply these things that's changing. I think societal tolerance, or perhaps tolerance for risk-taking, is kind of evolving as well. So gene editing, in a nutshell, is under lock and key when it comes to tinkering with human beings, and rightly so. It's underpinned by this very precautionary principle, and it tends to be regulated with that precautionary principle in mind across most countries when it comes to human health. Having said that, in 2018 there was an international incident where a researcher in China was involved in bringing gene-edited babies to term. He had edited embryos to develop resistance to HIV, and then the embryos were brought to term, and there was a massive public and academic outcry, and there were repercussions to that. But I think any time any of these incidents happen, they take progress back in some respects as well, because there's a lot of fearmongering; there's a whole swathe of outcry against Frankenstein-esque species being developed. So it does tend to go along the vein that a precautionary principle governs these things. Having said that, outside of the human health domain, regulation is a bit more lax and quite varied. In the U.S. and China, the rules are perhaps a lot more relaxed when it comes to gene-edited crops for agriculture, for commercial consumption, where edited or modified crops are okay for people to have in their everyday life. But if you look at Europe, genetically modified crops are a big no-no; they're effectively banned from human consumption. And the U.K. has only just recently passed an act that allows it to utilize some of these new, emerging gene editing techniques to produce crops, and eventually farmed animals, for human consumption. But public acceptability is still divided on that. And I think the really interesting thing with these technologies is that gene editing has been regulated very heavily, very strongly, for a long time. But the way the technology is advancing, some of the techniques that are being governed with these legacy frameworks no longer fit into the mold of what was defined at the time of regulation. You're calling something gene editing, but actually, is it gene editing? It's really challenging what it means for progress, and I think there are a lot of loopholes emerging as a result as well. So, yes, there's a lot of variation, but I would say gene editing still seems to be quite underpinned by this precautionary view around what we can and cannot do with it.
Deanna Lee
And Tim, what about AI and machine learning? I have an inkling that heavy regulation may not be the case there, but you tell me.
Tim Marler
Yeah, for sure. Although that's changing. Relative to gene editing especially, there's been less regulation. In fact, as part of the study, we plotted out not only the technical developments but also policy developments over the last few decades, and you see a distinct difference in the acceleration of AI. And this happens for two reasons. On the technical side, we talked about some of those things: there's a confluence of computing power and data acquisition, manipulation, and management. On the policy side: one, almost from a cultural perspective, I think modifying an algorithm on your laptop to create a plot for a research experiment when you're getting your degree is a little more innocuous. I mean, who cares? As opposed to modifying part of a human, right? So just culturally, there has been less concern about that. In addition, in 1969, what's called the Mansfield Amendment was passed, which said that federally funded research must have a mission-oriented, explicit outcome. There was a concern about, let's say, academics just chasing their tails, and AI, at least at that time, was viewed as not delivering on expectations and promises. As a result, in the '70s and '80s you have what's referred to as the AI winter: there wasn't as much work, there wasn't as much regulation or policy. Then, relatively recently, that changed, and you see a lot more investment in AI and machine learning. DARPA, the U.S. Defense Advanced Research Projects Agency, has a lot of effort in this area. There has very recently been the CHIPS and Science Act in the U.S., which is focused on semiconductor manufacturing; that fundamentally plays a large role in compute capabilities, which then affects machine learning. Then, in 2017, you had China's national AI action plan. They said, this is going to be really important and we want to be the leader. So then the U.S., the EU, and the U.K. saw that and said, we ought to have an action plan too. So some of this planning, or vision, and related regulation begets regulation, and recently you've seen an increased focus on this. As another note regarding AI and machine learning regulation: it again ties back to the data. And there's a lot now of, I think, very healthy discussion about what kinds of data should be regulated. Who owns the data about you? Who owns the data that Google has tracking what you like to buy? And who should be able to have access to that, and regulate that? I don't know that we've cracked that nut, but I think those discussions are now starting to occur. One more note: I would say that both of these capabilities, to some extent, have been around for a long time. Now they've accelerated, and we're playing catch-up a little bit with the regulation. So there's a need not necessarily to be reactionary, but to be proactive in thinking about policy and regulation with both these technologies, let alone the confluence of them.
Deanna Lee
And that's a great segue, Tim. Let's talk a little bit about how these two technologies do converge. That's a primary focus of the report that you just published. Can you tell us a little bit about how you see gene editing and machine learning coming together? Is this something that is being actively worked on right now? Is it far off in the future and how do you see it happening?
Sana Zakaria
Primarily, what we find is that machine learning algorithms are being applied to genomic datasets, and that's generating a whole new suite of capabilities. Now, we're not talking about the future; this has been happening for some time, to some extent, in academia. But what has really catalyzed this convergence is private investment, industry investing in these technologies to provide services and solutions to other industries, to government sectors, and to academia. And it really feels like it is now actively happening, at pace, across the sector. And it isn't limited to just taking the whole machine learning suite of tools and bolting it onto a gene editing platform; it has almost created a new platform technology, if you will, that you can then take and scale, and, in a more accurate fashion, do lots of different things across multiple sectors, which I'm sure we'll get into.
Deanna Lee
Sure. So what are the main benefits of applying machine learning to the work that's being done in gene editing? Is it accuracy? Is it doing things at scale? Is it speed? Is it all of those things?
Sana Zakaria
Yeah, absolutely, it is all of those things. The gene editing tools are still the same, but the capacity to utilize them, the capacity to utilize the huge amounts of data that we've amassed over the years, has really accelerated. So it's about an increase in scale, an increase in pace, an increase in accuracy. But it's also about this ability to predict and infer that wasn't there before, which is brought on by the machine learning algorithms that are being applied. For instance, we've sequenced the entire human genome, led by the Human Genome Project and similar initiatives, and as a result we have this huge amount of data, but to a large degree we don't really know how to interpret it. We don't know what these sequences of A, T, C, and G mean. We don't know which ones are responsible for a certain disease. We're still discovering new things every day. But what machine learning has done is improve our ability to predict associations between certain sequences and how they might play out in a human being.
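As a toy illustration of the kind of sequence-to-trait prediction Sana describes, here is a sketch with invented sequences, labels, and a made-up "risk" motif, not a method from the report: a simple classifier can learn to associate short DNA strings with a label.

```python
# Toy sketch: learn an association between short DNA sequences and a label.
# Sequences and labels are invented; real genomic datasets are vastly larger.
import numpy as np
from sklearn.linear_model import LogisticRegression

def one_hot(seq):
    """Encode a DNA string as a flat one-hot vector (4 slots per base)."""
    table = {"A": 0, "T": 1, "C": 2, "G": 3}
    vec = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        vec[i, table[base]] = 1.0
    return vec.ravel()

# Invented training data: label 1 marks sequences containing the motif "GGC"
seqs = ["ATGGCA", "TTGGCT", "CGGCAA", "ATATAT", "TTTCAA", "ACACAC"]
labels = [1, 1, 1, 0, 0, 0]

X = np.array([one_hot(s) for s in seqs])
model = LogisticRegression().fit(X, labels)

# Predict the probability that an unseen sequence carries the "risk" motif
print(model.predict_proba([one_hot("AAGGCT")])[0, 1])
```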
Deanna Lee
Okay. That's a great example, and I think our listeners might be wondering what some other applications of this convergence of gene editing and machine learning might be. What are some future scenarios? Maybe you can give us some good and some bad, or some trade-offs that are involved.
Sana Zakaria
Yeah, it's a really interesting question: what's good and what's bad? So let's say machine learning is applied to a genetic dataset, and we're able to determine which genes and associated sequences are responsible for what we would classically term intelligence. Now, notwithstanding any legal frameworks, let's say the technology then allows those who can afford it to give birth to children who are, let's say, cognitively enhanced. This could be good if everybody has access, and everybody now has a higher level of so-called intelligence that allows us to achieve amazing feats in society for the public good. But it could also be really bad, and put to nefarious use, if only certain populations of certain countries have access. And depending on which side of the equation you're on, it could be good to have a competitive edge in the global economy; if you're on the other side, that might not be so good. Maybe a simpler example is where an algorithm can predict whether you're going to die of cancer in your 50s. A person in their 30s realizes this, and they're put on a personalized medical plan to prevent it. That is inherently a good thing. But again, if only some people can access this, it could lead to a widening of social and racial disparities in health. And if the information is shared with other people and other sectors, where people can't access treatment but everybody knows that they're going to die of cancer in their 50s, it could be really detrimental to their mental health and their social circumstances: what would happen to their insurance premiums, their mortgage options? So I think technology and progress inherently aren't good or bad. The way you utilize it, the way you access it, and the kind of equity in its adoption in society can deem it good or bad.
Deanna Lee
Right. I think when we spoke earlier, you talked about thinking about it like two sides of a coin, and I think that's a useful way to characterize it.
Sana Zakaria
Yeah, absolutely. I like to think of it as either a treatment or a toxin. You can use it for good and create treatments, or you can use it for bad and create a virus that's going to wipe out half the population. I think that kind of outlines what's possible, both in terms of benefits and risks. But that's what we try to grapple with in the study: these capabilities are there, or developing, and it's more about how you ensure that it leans one way and not the other.
Deanna Lee
Absolutely. Tim, anything to add, specifically from a machine learning perspective, on this?
Tim Marler
In terms of what happens as these two technologies come together, I think one main takeaway from our study, just backing up a bit, was the necessity to consider both technologies together. There's a large effort now with emerging technology to focus on what's going to happen with energy, what's going to happen with quantum, what's going to happen with AI. I think to really understand the risks and opportunities thoroughly, it's necessary to understand how these technologies interact. We looked at gene editing and machine learning; those will be affected by quantum capabilities down the road. So it's necessary to look at these two things together. That, I think, would be the main thing I would add. Yeah.
Deanna Lee
Well, and let's talk about gaining that deeper understanding a little bit. What is the timeline for this sort of convergence? It seems like it's already happening, but is there any way to know?
Tim Marler
Especially long term, it's very difficult, some would argue impossible, to predict the future, especially given such a potent potential impact from the confluence of technologies. One of the things our study really brought out was the necessity to look not just across technologies but across countries. To consider two, three, four, or five different countries, cultures, and sets of capabilities, to consider AI, gene editing, perhaps quantum, and to say how these things are going to interact and what the impacts will be 40 or 50 years from now: I would almost argue it's impossible. Nobody has a crystal ball. I think many of the things that Sana discussed are available now, possible now, and what happens is you have an increase in speed, precision, and accuracy, and an increase in the ability to run experiments on your computer as opposed to in the lab. To be sure, there are barriers and challenges, many of which scientists, researchers, and others are focusing on, certain computational capabilities with machine learning in particular. There can be, well, there definitely is a challenge when you have especially complex mathematical models that predict what's going to happen based on data: a difficulty in understanding why they predicted that. Imagine you've got some model, and it's trying to determine 100 different numbers, and those numbers affect its result. Well, why did that one number become a two as opposed to a 1.8? Really understanding those kinds of nuances can be difficult if you're talking about thousands of variables, or even millions. So the work in understanding the so-called black box of machine learning is ongoing. That can lead to a public fear of, gosh, we don't even know how the robot thinks. It's not quite like that, but it is nonetheless a challenge. With all of that, and given that it's very difficult, if not impossible, to predict the future, I think the more prudent question is how we adapt to changes. Not necessarily how we react, but how our decisionmaking, our policymaking, can be more nimble, more adaptable. The technology is changing very quickly, but you can imagine the U.S. Congress, for example, is not going to change a law regulating AI in a day, while things can happen in a matter of weeks on the technology side. So I think that should be the focus: recognize that we can't predict the future, and ask how we adapt as the technology, the environment, and the interactions change.
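To give a flavor of the black-box problem Tim describes, here is a small, purely illustrative sketch, not from the episode, of one common probe called permutation importance: shuffle one input at a time and watch how much the model's fit degrades, which hints at which inputs the model leans on without fully explaining why. The data and model below are invented.

```python
# Toy sketch of probing a "black box": permutation importance.
# Shuffle one input feature at a time and see how much the fit drops;
# large drops suggest the model leans heavily on that feature.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))             # five candidate input features
y = 3 * X[:, 0] + 0.1 * X[:, 4] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(random_state=0).fit(X, y)
base = model.score(X, y)                  # baseline R^2

for j in range(5):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
    print(f"feature {j}: score drop {base - model.score(Xp, y):.3f}")
```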
Deanna Lee
And that's exactly what I want to ask you about next. If the goal is to maximize the benefits and minimize the risks of both AI and gene editing, and how they come together, is that even possible? Can that be done, and how?
Sana Zakaria
It is possible, but it's very complex and very challenging. Otherwise, it probably would have been done, or we would have made a lot more progress by now, I imagine. But it's a very complex landscape, like Tim outlined, and like Tim says, we can't predict the future. So policy has to be nimble; it has to be adaptable, so that we can keep pace with these technologies. Some of the broad principles our study outlines, around how you can ensure that policy develops as the technology matures alongside it, center on constant iteration and participatory dialogue, both with technical experts and with the public, because public acceptability is a crucial element of technology adoption and engagement as well. We also talk about, going back to what Tim said earlier, not necessarily defaulting to fearing the unknown, but maintaining constant, consistent communication with technical practitioners and government officials as well. And I think what we also see is that a combination of hard law and soft law has been quite successful in other sectors. That seems to be the current focus for, for instance, the AI foundation models that have led to ChatGPT-type outfits, where frameworks and standards are being tested and could be a means to meeting legislative requirements. But these are all soft-law-type instruments. So I think those are some of the key principles. I'll revert to Tim to talk through some of the other, deeper and more salient points that we uncovered in our study.
Tim Marler
Sure. Thanks, Sana. So as one part of the study, we ran a mini-game, or tabletop exercise, where experts came in, played the roles of the U.S., China, and the EU, and said, okay, here are potential future scenarios where things might go right or wrong; what policies, as a country, would we enact? And there were all sorts of interesting insights as they worked through those scenarios. The main takeaways, though, were, I would say at the top of the list, that it's important to try to look through the eyes of different cultures and to game out interactive exchanges. What I mean by that: ethics and morality, for example, are an important issue in all these countries. As we look at other countries, we tend to think their ethics are so different, or they don't have any, right? But in fact, ethical issues are a focus point for AI and gene editing everywhere; ethics and morality can just be defined and categorized differently. So it's important to understand those different perspectives. Secondly, as part of this game or exercise, we had each country make a decision: what would you do? They then took that back and said, if that's what they're going to do, this is what I'm going to do. So you game it out, and I think that's how the real world works. You have this iterative sequence, so it's not enough to just take a snapshot; you've got to be looking at the if/thens of this. Thirdly, coming out of the study and this exercise: coordination is needed between these countries, because information is disseminating so quickly. And then finally, as we've touched on, public dialogue is needed between international communities and within them. Those were some big takeaways. And I said finally, but there's one other aspect we've seen: this domino effect, where a development in one place on one topic can affect, or be affected by, a policy in a different place on a different topic. That again reinforces the need to look at the whole tapestry, the whole network of players, so to speak.
Deanna Lee
So before we wrap up, I think now is maybe a good time to take a step back. We've covered a lot of ground in this conversation. What are the main takeaways that you want our listeners to walk away from this episode with? What are the, let's say, top three? You can do more than three. What are the top things that people need to understand about AI and gene editing and what the future holds?
Tim Marler
I can take a first shot at this; some of it reiterates what we've said before. One, you've got to consider technologies concurrently, multiple technologies. Two, you've got to consider both the technology and the policy. The tech is often viewed as more exciting, but the governance is just as critical, and that means concurrently considering multiple cultures, multiple countries. Three, I think it's important to consider incentives. It's not a word the general public uses a lot in talking or reading about tech, but what are the incentives for coordination, for collaboration? What could foster collaboration between the U.S. and China on gene editing? It's a tough question to answer, but we've got to address it. I think education at all levels, of policymakers, of scientists, of non-technical folks, is, in my mind, the top priority. And then finally, there's a need to view policy and governance through multiple styles, multiple types of policy, whether it be reactionary, proactive, et cetera. But Sana, especially on that last point, if there's more to add, please.
Sana Zakaria
I think I will just conclude by highlighting why these findings are really important: they're underpinned by the fact that there's a real vacuum of regulation and governance, or even conversation, when it comes to this convergence of machine learning and gene editing. At the moment, the focus seems to be really heavily on upstream regulation of machine learning models and how they can be repurposed by non-state actors, and those conversations tend to center on things like biological weapons and pandemics. These are really important areas, but there is much less focus on the multitude of ways in which machine learning is interacting with multiple sectors of biology, like protein folding, gene editing, and engineering new species, and on both the promises and the risks that come with that. So I think the current regulation and policy conversation is too narrow. It's really focused on damage control, and I think policymakers would benefit from casting the net a bit wider, to look at the current state of the art and the emerging state of the art at this intersection, and then develop policies and regulation that are really fit for purpose and a bit more measured. I think that underlines all of the principles and main takeaways that Tim's just outlined for us.
Deanna Lee
Okay. That's all we have time for today. Sana, Tim, thank you so much for joining us.
Sana Zakaria
Thank you.
Tim Marler
Thank you. This was fun.
Deanna Lee
And thanks to our listeners. There's a lot more to cover on this topic, so if you'd like to learn more, we encourage you to check out the show notes at rand.org/podcast where you can find a link to the research we discussed today. We'll see you next week.