Preparing for a natural disaster like a hurricane is critical to minimizing damage, but what motivates individuals to heed warnings and act remains largely unexplored territory.

The question intrigued Wharton marketing professor Robert Meyer, co-director of the Risk Management and Decision Processes Center. Over the past five years, Meyer has worked to develop an interactive simulation to study how such factors as news media reports, storm warnings and the level of concern expressed by friends and neighbors prompt people to take steps such as installing shutters to protect windows ahead of a hurricane. That model is described in a working paper titled, “Development and Pilot Testing of a Dynamic Hurricane Simulator for the Laboratory Study of Hurricane Preparedness and Mitigation Decisions.”

By surveying residents impacted by Hurricane Earl in 2010, Meyer was able to validate that the lab simulation accurately reproduced many of the key aspects of real storm responses. “Those surveys produced the same information we got from the simulation. The two were mirror images of each other,” Meyer says. “You can really study how people behave in these extreme events in the virtual world.”

His hurricane laboratory is based on a methodology known as information acceleration (IA). Instead of using surveys to get a sense of consumer behavior, IA uses computers to simulate the learning that individuals go through before making a choice on a product — whether it is reading newspaper stories or talking to friends. First developed at the Massachusetts Institute of Technology in the mid-1990s, IA was useful in figuring out how consumers would adopt new technologies. Meyer saw an opportunity to use a similar approach to learn how external sources of information drove people to prepare — or not prepare — for a hurricane.

Taking on the Role of Meteorologist

Understanding what would prompt people to more effectively get ready for a natural disaster is critical in improving overall disaster preparedness, Meyer says. “We know very little about what triggers decisions” on disaster preparedness and what role different media play in forming risk perceptions. “The National Hurricane Center, for example, wants to be sure it uses the right graphics in giving warnings. And it worries that if people are instead getting their information from friends and family, then our efforts [to convey information about an impending storm] will be wasted.”

Meyer’s first simulation was set up to resemble a game. First, participants were given background reading to bring them up to speed on the scenario: It is September 2012, and they are living in Pompano Beach, Fla., which is facing a looming hurricane named Gabrielle. In the game, participants had a choice as to how to spend their time: They could go to work, enjoy leisure activities or take steps to protect themselves and their home from the hurricane. The hurricane preparation work included precautions like stocking up on food and water or installing shutters on windows ahead of the storm. Those preparation measures, however, did not earn any points for participants but rather protected a portion of the utility points they had earned through the work and play activities.

The final stage of the simulation described what happened when the hurricane hit. If it was a direct hit, the protection points came in handy. But if it was a miss, participants found they had wasted time on precautions they didn’t need. The goal was to finish with as many points as possible after the hurricane — something that required participants to weigh racking up utility points against spending time protecting those points.

To keep it interesting, Meyer designed the model so that precaution activities varied in how much time they consumed. “We had a roulette thing,” he notes. “So if you wanted to put up shutters, the time that took varied. We wanted to make it as realistic as possible because you can’t perfectly predict the amount of time it takes to do something.” Meyer also varied the outcome of the storm — whether it mounted a devastating direct hit on a participant’s home or left it largely unscathed.
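
As a rough illustration of this incentive structure, the sketch below simulates one round of the game in Python. The point values, probabilities, time budget and per-precaution protection gain are hypothetical stand-ins; the article does not report the actual parameters Meyer used, only that precaution times varied randomly and that protection shielded a portion of earned utility points against a possible direct hit.

```python
import random

# Illustrative sketch only: parameter values below are assumptions,
# not the figures from Meyer's working paper.
HOURS_AVAILABLE = 40       # assumed time budget before the storm arrives
UTILITY_PER_HOUR = 10      # assumed points earned per hour of work or leisure


def precaution_time_cost() -> int:
    """Each precaution (e.g., putting up shutters) takes a variable amount
    of time, echoing the 'roulette' element Meyer describes."""
    return random.randint(2, 8)  # hours, hypothetical range


def run_trial(hours_budgeted_for_prep: int, p_direct_hit: float = 0.5) -> float:
    """Simulate one participant's outcome for a given preparation strategy."""
    hours_spent = 0
    protection = 0.0  # fraction of utility points shielded from a direct hit
    while hours_spent < hours_budgeted_for_prep:
        hours_spent += precaution_time_cost()
        protection = min(1.0, protection + 0.25)  # each precaution adds protection

    productive_hours = max(0, HOURS_AVAILABLE - hours_spent)
    utility = productive_hours * UTILITY_PER_HOUR

    if random.random() < p_direct_hit:
        return utility * protection  # direct hit: unprotected points are lost
    return utility                   # miss: points kept, prep time was "wasted"


# Example: compare a no-preparation strategy with a moderate one.
random.seed(1)
for prep_hours in (0, 10):
    scores = [run_trial(prep_hours) for _ in range(1000)]
    print(f"prep hours budgeted: {prep_hours:>2}  mean score: {sum(scores) / len(scores):.1f}")
```

Under these assumed numbers, the trade-off the participants faced becomes concrete: hours spent on precautions forgo utility points but hedge against losing everything in a direct hit, and because both the time cost and the storm outcome are random, neither strategy dominates in every trial.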

Among the challenges of building the system was creating the content to set the scene. Meyer needed to film a series of television spots with a meteorologist reporting on the severity and path of the storm. Originally, he tried to tap a real meteorologist in Florida but found that the local stations were unwilling to let their people film spots about a fake hurricane. So Meyer himself stepped into the role. “It’s okay,” he says of his performance. “But I think I should keep my day job.”

The original pilot was done with 32 staff employees and graduate students at the University of Miami. It was intended to study two questions: how the variation in television graphics impacted preparedness, and how formal storm warnings by the National Hurricane Center translated into action by viewers. When it came to the graphics question, Meyer was interested in whether the preparation by participants changed depending on the use of two different images showing the storm’s likely future path. One image was a simple “cone of uncertainty” that showed a range of areas where the storm might make landfall, without highlighting any one particular location. The second was a “track forecast” where forecasters’ best guess as to where the storm would hit was superimposed on the cone. In addition, he wanted to find out what impact the timing of formal warnings by the National Hurricane Center had on preparation.

In the initial pilot, preparation activities peaked before the formal watches and warnings were issued — an indication that South Floridians were prone to take action well before the onset of a storm. At the same time, the pilot found that the presence of a track forecast on the weather map, which in the simulator ran just above Pompano Beach, where the fictitious homes stood, caused participants on average to spend less time on preparation. The possible reason: People were less inclined to prepare for the storm once they saw their town was unlikely to get the brunt of it.

Testing It Out on Hurricane Earl

Whether the results of Meyer’s simulation would match what is found in a real storm was unclear. But Meyer had the chance to test this out with regard to one element of his pilot study when Hurricane Earl was threatening the eastern United States in 2010. A telephone survey of 195 North Carolina residents three days before Earl was expected to make landfall found that, just as the model predicted, residents took some precautions well before the storm drew close and formal warnings were posted. The residents had levels of concern that far exceeded what was warranted by the technical forecasts from the National Hurricane Center. But as the storm drew closer, their concern level fell even as the government forecasts grew more worrisome. (Meyer could not test the validity of the finding about how the type of graphic used affected preparation since all residents in North Carolina were exposed to the same graphic.)

Meyer was not completely satisfied with this original version of the simulation. His concern was that the game mindset might induce people to behave a bit differently — perhaps to take more risk to rack up more points than they would in real life. So the latest generation of the simulation dispenses with the game approach and creates a virtual world where someone moves around his or her home to gather information on the storm or make preparations. Not only does the immersion approach more accurately reflect real life, but participants can also move through the experiment in 20 minutes versus 90 minutes for the game model.

In April 2011, the new simulation was tested by 387 Florida residents and produced some interesting findings. First, it showed that people did not rely much on the opinions of friends and neighbors when making preparation decisions — a finding reinforced by the telephone surveys of residents impacted by Hurricane Earl. Second, Meyer found that “storm fatigue” has a real impact on how people respond to a new threat. One group of participants read a series of stories about very destructive storms earlier in the hurricane season, while a second group read clippings about less destructive storms. The group that was bombarded with news about very bad storms actually prepared less in the simulation that followed than the other group. “You have a crowding out effect with disasters,” according to Meyer. “As you have one after another, people care less about the next one.”

The new immersion approach did turn up one finding that conflicted with the original approach. In the immersion model, the group that viewed a graphic showing the most likely path of the hurricane along with a “cone of uncertainty” prepared more — not less — than the group that was only shown the uncertainty cone. Meyer says in this case, the group that lived in the area of that likely path line prepared more than others, but the preparation of those outside the likely path zone didn’t fall. Hence, the overall net effect was that including a center-line forecast helped increase mean levels of preparation over the entire threatened region.

That information could be extremely valuable to the National Hurricane Center as it decides how to convey information on future storms. The simulation approach, Meyer notes, could also help increase preparation for other natural disasters. With earthquakes, for example, there is no advance warning, but people still have to make decisions about building a home designed to withstand an earthquake or retrofitting an existing home to limit damage. So the model could be used to determine what kinds of messages motivate people to make those investments. The U.S. Geological Survey, meanwhile, has inquired about using the model to help it figure out the optimal way to provide warnings and calls for evacuations in southern California in the event of mudslides and debris flows from heavy rains. In addition, a New York utility is looking into using the simulation to train its own employees on how to handle power outages following a storm.

Meyer has plans to hone his simulation further. He wants to study how emotions drive decisions in the event of a disaster. “There is a difference between someone describing a hypothetical situation [of a looming storm] and looking out the window and seeing the real thing,” Meyer points out. “You need to make it realistic and capture the emotions.” He plans to use tools to measure the physiological reaction people have when they are going through the simulation. When people see loops of the satellite images of a storm on television, “we want to know what that does to them physiologically. The emotional reaction to disasters is important.”