How AI Is Changing Our Approach to Disasters

Commentary

Aug 27, 2025


Illustration by Visual Generation

Disaster losses are rising, and the stakes are high for reducing risk. Artificial intelligence (AI) promises new ways to spot danger sooner, coordinate relief more quickly, and save lives and property. But AI doesn't just drop neatly into a command center. To matter in practice, it must be shaped to the messy realities of emergency management—and wrestle with the thorny questions that haunt every new technology: Who gets to use it? When should it replace traditional methods? And who makes sure it doesn't go off the rails?

Disasters are a costly problem. Global insured losses from natural catastrophes have grown 5–7 percent per year and are on track to reach $145 billion in 2025. In the United States, 2025 could be one of the costliest years on record for disaster losses, following the Los Angeles wildfires, Midwest tornadoes, and Mississippi and Texas floods.

The federal government has said it will ask states and localities to share more of the burden of managing disasters, even as state and local governments are under fiscal pressure. Emergency managers are the people charged with preparing for and responding to disasters. They work in government, the private sector, and nonprofits. They are being asked to assist with a range of new missions, including preparing for infrastructure failures, disease outbreaks, terrorism, and even attack from abroad. The hope is that AI can help manage their increasing workload. Can it? Should it?

What Is AI, and What Can It Do?

AI is a broad term. It refers to machines that perform complex tasks that were once thought to be reserved for humans, potentially including making independent decisions. AI can be used to prepare for disasters before they happen, and respond once they occur. Machine learning models can process vast datasets and forecast fires, floods, and hurricanes with greater precision than traditional methods. For example, NASA has used satellite data to forecast wildfire ignition points so that forest managers can take steps to reduce risk. For training, generative AI systems promise to help people, from experienced government managers to community members, take courses tailored to their needs. To better prepare for disasters, digital twins of communities model how earthquakes or floods might affect populations, so that planners can strengthen plans and infrastructure before disaster occurs.
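To make the forecasting idea concrete, here is a minimal sketch of how satellite-derived features might be scored for wildfire ignition risk. The feature names, weights, and alert threshold are invented for illustration; an operational system like NASA's would learn these from historical data rather than hard-code them.

```python
# Hypothetical sketch: scoring grid cells for wildfire ignition risk from
# satellite-derived features. Feature names, weights, and the threshold are
# illustrative, not drawn from any operational system.

def ignition_risk(cell):
    """Combine normalized features (0-1) into a single risk score."""
    weights = {"dryness": 0.4, "temperature": 0.3, "wind": 0.2, "fuel_load": 0.1}
    return sum(weights[k] * cell[k] for k in weights)

def flag_high_risk(cells, threshold=0.6):
    """Return ids of cells whose score exceeds the alert threshold, highest first."""
    scored = [(ignition_risk(c), c["id"]) for c in cells]
    return [cid for score, cid in sorted(scored, reverse=True) if score > threshold]

cells = [
    {"id": "A1", "dryness": 0.9, "temperature": 0.8, "wind": 0.7, "fuel_load": 0.9},
    {"id": "B2", "dryness": 0.2, "temperature": 0.4, "wind": 0.3, "fuel_load": 0.5},
]
print(flag_high_risk(cells))  # ['A1'] (A1 scores 0.83; B2 scores 0.31)
```

A learned model would replace the fixed weights, but the decision structure is the same: features in, ranked risk out, with a threshold that forest managers tune to balance false alarms against misses.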

During a disaster response, AI can provide a better picture of a crisis than traditional methods. Computer vision models using drone or satellite imagery can assess damage and help locate survivors. After Hurricanes Helene and Milton struck North Carolina and Florida in 2024, the nonprofit GiveDirectly used a Google-developed AI tool to identify areas with high concentrations of storm damage and poverty and send $1,000 in cash relief to affected households. The idea was that targeted direct payments would be faster and more efficient than traditional aid programs.
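The targeting logic behind an approach like GiveDirectly's can be sketched simply: select areas that clear both a damage threshold and a poverty threshold, then rank them. The tract names, cutoffs, and data below are invented for illustration and do not describe the actual Google-developed tool.

```python
# Illustrative sketch of damage-and-poverty targeting for cash relief.
# Cutoffs and data are invented; the real tool used satellite damage mapping.

def eligible_areas(areas, damage_cutoff=0.5, poverty_cutoff=0.3):
    """Select areas with both heavy storm damage and high poverty,
    ordered by the share of damaged structures."""
    hits = [a for a in areas
            if a["damaged_share"] >= damage_cutoff
            and a["poverty_rate"] >= poverty_cutoff]
    return sorted(hits, key=lambda a: a["damaged_share"], reverse=True)

areas = [
    {"name": "Tract 1", "damaged_share": 0.72, "poverty_rate": 0.41},
    {"name": "Tract 2", "damaged_share": 0.65, "poverty_rate": 0.12},  # damaged but wealthier
    {"name": "Tract 3", "damaged_share": 0.18, "poverty_rate": 0.55},  # poor but lightly hit
]
print([a["name"] for a in eligible_areas(areas)])  # ['Tract 1']
```

Note that the cutoffs themselves encode a policy choice: Tract 2 is heavily damaged and Tract 3 is poor, yet neither qualifies under this rule.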

Robots still in pilot testing have been used in simulated missions to rescue survivors. Drones can measure radiation after a disaster in zones too hazardous for humans. And emergency management agencies are already using natural language processing to translate warnings and alerts into different languages. After a disaster, AI systems can help track fraud and abuse to ensure that aid reaches the people who need it. Health care systems already use AI systems to track injuries and care for long-term follow-up, and the same could be done after disasters.

There are many definitions of AI, but one way to think about the technology is in terms of specific tools. Table 1 shows AI tools, roughly organized by their use before, during, and after disasters. The table lists commercial systems used for general purposes, and examples of current or potential uses in emergency management.

Table 1: AI Tools and Example Uses

| Tool | Description | Examples of Commercial Systems | Uses in Emergency and Disaster Management |
|---|---|---|---|
| Predictive Analytics | Finds patterns in data and forecasts future outcomes. | Salesforce | Risk modeling; disease outbreak spread prediction; flood/wildfire spread prediction; dashboards and situational awareness |
| Generative AI and Natural Language Processing | Understands and translates human language and creates new text, images, or video. | ChatGPT, Claude, DALL·E | Drafting emergency communication templates; creating scenarios for training; multilingual crisis communication; rumor detection |
| Robotics & Automation | Performs physical tasks with or without human control, including operating vehicles. | iRobot Roomba, Da Vinci Surgical System, Boston Dynamics robots, Waymo | Search-and-rescue in dangerous areas; supply delivery; debris clearing |
| Computer Vision | Identifies and interprets objects, people, and activities in images and video. | Google Photos, Clearview AI, Tesla Autopilot | Damage assessment via drones/satellites; search-and-rescue; wildfire smoke mapping |
| Speech Recognition & Generation | Converts speech to text and produces human-like speech from text. | Siri, Alexa | Voice-to-text for field reporting; hands-free operations |
| Recommendation Systems | Suggests products, content, or actions based on user behavior. | Netflix, Spotify, Amazon | Resource allocation; shelter options; individual risk alerts |
| Fraud Detection & Security | Identifies anomalies to call attention to risks. | Mastercard AI Security, Darktrace, PayPal | Detecting fraud in payments; cybersecurity |

The use of AI to manage disasters is in its early days, but the table shows its potential for a range of uses.

How to Implement AI

After a wave of enthusiasm about AI's potential to transform work and economies, some news reports now urge caution about how transformative AI will really be. As with other technologies, AI's effects will come down to how it is integrated into organizational routines. If it is difficult to use, costly, produces incorrect output, is subject to bias, or lacks traceability (the ability to understand why it made a given decision), then users will lose confidence. AI systems also reflect the data they are trained on. To take just one example, prioritizing aid based on property damage will favor wealthier areas. AI systems alone cannot solve ethical and policy challenges.

We reviewed uses of AI in wildfire management, and in emergency management more broadly. We found that organizations that adopted and deployed AI and other emerging technologies took some of these approaches to mitigate the potential negative effects:

  • use of pilot testing, red teaming, or stress testing AI systems to identify points of failure
  • regular monitoring of AI performance, especially relative to the technology or process being replaced
  • giving the AI specific guidance for well-defined problems so that it executes narrow tasks well, and iterating to improve performance
  • use of ethical guidelines so that certain decisions are off the table for AI
  • comparison of AI performance to human performance for specific tasks and weighing of the advantages and disadvantages of each to decide where to use AI and where to use humans
  • use of AI for planning and implementation, or where risk to humans is high (as in the most dangerous parts of wildland firefighting)
  • identifying appropriate trade-offs between efficiency and oversight, since AIs can operate quickly and at large scale
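The monitoring point above (comparing AI performance to the technology or process being replaced) can be made concrete with a simple after-action check: score both the AI system's alerts and the legacy method's against ground truth. The zone names and metric choice below are invented for illustration.

```python
# Hypothetical monitoring sketch: compare an AI system's alerts against the
# legacy process it replaces, using after-action ground truth.

def precision_recall(flagged, actual):
    """Precision: share of flags that were right. Recall: share of real
    damage zones that were flagged."""
    flagged, actual = set(flagged), set(actual)
    true_positives = len(flagged & actual)
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

actual_damage = {"Z1", "Z2", "Z3", "Z4"}   # confirmed after the event
ai_flags = {"Z1", "Z2", "Z3", "Z9"}        # one false alarm, one miss
legacy_flags = {"Z1", "Z5", "Z6"}          # more false alarms, more misses

print(precision_recall(ai_flags, actual_damage))      # (0.75, 0.75)
print(precision_recall(legacy_flags, actual_damage))  # roughly (0.33, 0.25)
```

Running this kind of comparison after every event, rather than once at procurement, is what distinguishes regular monitoring from a one-time pilot.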

Opportunities and Challenges for AI-Enhanced Disaster Management

AI technologies promise to help identify disasters before they begin, and guide planners in reducing risk. They can also find and help save people and property during a disaster, and help make sense of large, unstructured data to guide recovery and planning for the next event.

In the short term, using AI well requires overcoming implementation hurdles. In the longer term, using AI well comes back to classic governance questions of deciding who has legitimate authority and how to make collective decisions. If we can make AI do what we want technically, can we agree on what we want? Technical experts call this the problem of alignment, referring to aligning AI models with human values, goals, and intentions. For example, after a hurricane, aid could plausibly first go to the areas with the highest storm surge, the areas with the greatest property damage—which may also be the wealthiest—or the poorest areas with less capital to rebuild. Humans will need to make the value judgments that underlie AI systems to prioritize and deliver aid.

Deciding on the highest-order values and the appropriate training data now could influence the AIs of the future. Because AI is not a single technology but a group of capabilities embedded in many different tools, some capable of making independent decisions, efforts to ensure AI does what humans want will need to focus on networks and systems, not just a single tool. For example, it is hard to locate responsibility for an AI-based disaster response decision because AI systems are made up of many different tools, or "agents," working together. A system of agents might see an area damaged by hurricanes, assess harm based on indicators such as roof damage, assess transportation routes, and provide recommendations for where to prioritize sending resources. Separate AI agents would conduct each of these steps.
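The multi-agent pipeline described above can be sketched as a chain of separate functions, each standing in for one "agent." Every name, weight, and number here is invented; the point is the structure, and how hidden value judgments shape the output.

```python
# Toy sketch of a multi-agent pipeline: each step is a separate "agent"
# (here, a plain function), so responsibility for the final recommendation
# is spread across all of them. All weights and data are invented.

def damage_agent(area):
    """Estimate harm from indicators such as roof damage (0-1 scale)."""
    return 0.7 * area["roof_damage"] + 0.3 * area["flooding"]

def access_agent(area):
    """Score how reachable the area is by road (1.0 = fully open)."""
    return 1.0 - area["blocked_routes"]

def prioritizer_agent(areas):
    """Recommend where to send resources first: weigh harm by reachability."""
    ranked = sorted(areas,
                    key=lambda a: damage_agent(a) * access_agent(a),
                    reverse=True)
    return [a["name"] for a in ranked]

areas = [
    {"name": "Coastal", "roof_damage": 0.9, "flooding": 0.8, "blocked_routes": 0.6},
    {"name": "Inland", "roof_damage": 0.5, "flooding": 0.2, "blocked_routes": 0.1},
]
print(prioritizer_agent(areas))  # ['Inland', 'Coastal']
```

Note the outcome: multiplying harm by reachability sends resources to the less-damaged but accessible area first. Whether that is the right call is exactly the kind of value judgment that is buried inside the pipeline unless humans surface and decide it explicitly.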

Like any tool or outsourced activity, using AI well will require setting up expectations and legal and technical guardrails, and working with stakeholders to make sure the AI does what we want. The private sector is making big investments in the technology, but potential users also need to invest in understanding and planning for how best to use it. Otherwise, we risk repeating an old story with new tools: trusting the map more than the territory, the model more than the messy, human reality it was meant to serve.