Before Hurricane Hugo swept through Georgia and North and South Carolina in 1989, the insurance industry in the U.S. had never suffered a loss of more than $1 billion from a single disaster. Since then, numerous catastrophes have exceeded that figure. Hurricane Andrew in 1992 caused $15.5 billion in insured losses in southern Florida and Louisiana. Damages from the Northridge earthquake in Southern California in January 1994 amounted to $12.5 billion.



Residential and commercial development along coastlines and in areas prone to earthquakes and floods suggests that future insured losses will only grow — a trend that emphasizes, as never before, the need to assess and manage risk on both a national and a global scale. “People today are asking the question, ‘How do we scientifically evaluate catastrophic risk?’” says Howard Kunreuther, co-director of Wharton’s Risk Management and Decision Processes Center. A new book — edited by Kunreuther and Patricia Grossi of Risk Management Solutions — sets out to answer that question. The book is titled Catastrophe Modeling: A New Approach to Managing Risk.



“Businesses are clearly interested in this topic because they need to know more about the nature of the risks they face, their likelihood of occurrence and the damages that may result,” says Kunreuther. “Insurers are interested because they need to know what premiums to set for different types of risk in the context of their overall risk portfolio; and the government is interested because it needs to know what regulations and standards would be appropriate to lessen risk and reduce losses.” Kunreuther and Grossi’s book analyzes how catastrophe models can be used in these contexts.



The authors pulled off an unusual feat: They persuaded the world’s three leading catastrophe-modeling companies to contribute chapters on the role of modeling in rate setting, portfolio management and risk financing. “It was a coup for us,” says Kunreuther, noting the involvement of AIR Worldwide, EQECAT and Risk Management Solutions. “These companies are alone in systematically analyzing risk using data from the best scientists and engineers in the world, and providing information back to insurers, reinsurers and financial institutions.”



Kunreuther had first brought these three companies together in 1996, when the insurance industry was still reeling from Hurricane Andrew and the Northridge earthquake. “Insurers simply didn’t know how to deal with risk anymore,” he says. One leading insurer suffered $4 billion in damage, and its Florida office avoided bankruptcy only because the parent company bailed it out. “One of the key features of our book is an analysis of ways that insurers can reduce their losses by taking certain preventative steps,” Kunreuther says. “You can’t reduce the probability of these events occurring, but you can lessen the damage that results from them.”



Catastrophe Modeling has a larger audience than the authors initially anticipated, Kunreuther notes. It is relevant not just to insurers, reinsurers and actuaries, but also to “any business or policy maker who is concerned with catastrophes and is looking at ways to reduce risk and obtain financial protection against future losses,” he says, adding that one possibility is the use of new capital market instruments such as catastrophe bonds (insurance-linked securities). And while Catastrophe Modeling focuses primarily on natural disasters, its approach can be applied to other areas — for example, to business risk, environmental risk or broader enterprise risk. Indeed, the last chapter extends catastrophe modeling to terrorism, looking at the impact of 9/11 on the insurance industry, the nature of terrorism coverage, and recent developments in terrorism modeling.



EP Curves


If there is any innovation that gets people to think about risks on a broader level, says Kunreuther, it’s the idea of exceedance probability (EP) curves, used by risk managers to quantify their catastrophe risk potential. Catastrophe Modeling describes in detail how EP curves are developed and their importance in managing one’s risk.



For example, in Part IV of the book, Patricia Grossi and several former Wharton students used catastrophe models to analyze the probability of an earthquake of a particular magnitude occurring over the next year on a certain fault near Oakland, Calif. Separately, a model can predict the damage such an event would cause to homes in that area and to the portfolio of a hypothetical insurer. These data form the building blocks of the EP curve. The same exercise is done for other possible earthquakes, and the resulting information further refines the EP curve. The model also looks at the probability of the damage or loss exceeding a certain value. “An insurer could say, ‘We have insured a portfolio of homes in Oakland. What is the likely damage? What is the probability that the loss will be greater than $X million or $Y million?’” says Kunreuther. “Then the insurer prices its policies accordingly. The company could also determine if it has too much coverage, which could end up exposing it to bankruptcy.
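As a rough illustration of how these building blocks combine, the Python sketch below computes an EP curve from a small, entirely invented event set. The probabilities, losses and thresholds are hypothetical, not from the book, and the events are treated as independent for simplicity:

```python
# Minimal sketch of exceedance probability (EP) curve construction.
# Each event carries an annual occurrence probability and an estimated
# portfolio loss; all figures below are invented for illustration.

def ep_curve(events, thresholds):
    """For each loss threshold, return the annual probability that at
    least one event causing a loss >= threshold occurs, assuming the
    events are independent of one another."""
    curve = {}
    for t in thresholds:
        # P(no event with loss >= t occurs) = product of (1 - p_i)
        p_none = 1.0
        for prob, loss in events:
            if loss >= t:
                p_none *= 1.0 - prob
        curve[t] = 1.0 - p_none
    return curve

# Invented event set: (annual probability, insured loss in $ millions)
events = [(0.01, 500), (0.02, 250), (0.05, 100), (0.10, 25)]

for threshold, prob in ep_curve(events, [25, 100, 250, 500]).items():
    print(f"P(annual loss >= ${threshold}M) = {prob:.3f}")
```

A real catastrophe model builds such a curve from thousands of simulated events rather than four, but the insurer’s question is read off the same way: the probability that annual losses exceed a given dollar amount.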



“That is the key to everything we do in this book,” Kunreuther notes. “If an insurance firm says it doesn’t want to tolerate more than a 1% probability of having a loss greater than a certain amount, then the insurer has several different ways of reducing that loss. One is reinsurance. A second is catastrophe bonds. A third is mitigation,” or measures taken to reduce or eliminate loss from natural disasters. Such measures can range from retrofitting unreinforced masonry buildings and developing new standards in building codes to giving tax breaks for certain property improvements. “A fourth strategy is to reduce coverage,” Kunreuther continues. “A fifth is raising premiums. These approaches allow insurers to say, ‘Suppose I had this portfolio of risk in a particular area. What would happen to me, and what can I do about it?’”
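A toy version of that 1% tolerance check can be sketched in a few lines of Python. All figures are invented, and mitigation is crudely modeled as a uniform 30% reduction in every event’s loss; the book’s treatment of these levers is far more detailed:

```python
# Hypothetical check of a loss-tolerance target before and after
# mitigation. Event probabilities and losses are invented figures.

def exceedance_prob(events, threshold):
    """Annual probability that at least one independent event produces
    a loss >= threshold (in $ millions)."""
    p_none = 1.0
    for prob, loss in events:
        if loss >= threshold:
            p_none *= 1.0 - prob
    return 1.0 - p_none

events = [(0.005, 500), (0.02, 250), (0.05, 100)]
tolerance, threshold = 0.01, 200  # tolerate at most 1% chance of a $200M+ loss

base = exceedance_prob(events, threshold)

# Mitigation lever: assume retrofitting cuts each event's loss by 30%
mitigated = [(p, loss * 0.7) for p, loss in events]
after = exceedance_prob(mitigated, threshold)

print(f"before mitigation: {base:.4f} (within tolerance: {base <= tolerance})")
print(f"after mitigation:  {after:.4f} (within tolerance: {after <= tolerance})")
```

In this invented case the portfolio breaches the 1% target until mitigation shrinks the large-loss events below the $200 million threshold; reinsurance or catastrophe bonds could be modeled analogously by capping or transferring part of each retained loss.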


One of the book’s most critical elements, Kunreuther adds, was the presence of a technical advisory committee of scientific experts who reviewed models constructed by the three modeling companies, including their analysis of potential losses from earthquakes to residential properties in Charleston, S.C. The companies’ identities were kept confidential in order not to reveal the specifics of their estimates, and each company was given a common portfolio of risk to analyze.



Grossi, who was a Wharton PhD student at the time the book was written, then compared these models. The point, says Grossi, was “to discover the variation among the results generated by each of the three models.” Grossi expected that the exceedance probability curves produced would most likely be dissimilar, given the degree of uncertainty in generating an EP curve for the Charleston area — a “low” seismic hazard region compared with California. However, Grossi also noted that “it was important to compare the curves to discover the range in which losses would most likely fall. While each EP curve was valid, comparing the curves makes one appreciate the uncertainty in catastrophe risk and think in terms of a range of loss estimates — rather than a single estimate.”



Scarcity of Historical Loss Information


In Catastrophe Modeling, Grossi, Kunreuther and their colleagues note that governments, individuals and corporations — particularly in well-developed countries — often fail to prepare for major natural disasters such as hurricanes, tornadoes, earthquakes and floods. Policymakers typically are moved to action only after the disaster occurs. Yet there is no sign that natural disasters are going to let up any time soon: In the last eight months alone, hurricanes Charley, Frances, Ivan and Jeanne swept through parts of Florida, New Jersey and Pennsylvania; the Asian tsunami killed approximately 300,000 people in 11 countries; and, most recently, floods in Southern California have washed away homes and roads and cut electrical power to thousands of Los Angeles County residents.



Big disasters, of course, continue to mean big losses. Worldwide, loss figures from natural disasters during the last decade exceeded $40 billion every year but one. In 2004, the economic losses from natural disasters totaled $120 billion with $14 billion directly attributable to the December 26th tsunami in the Indian Ocean.



Because of the growing recognition that disasters can wreak enormous havoc, catastrophe modeling has already gained widespread acceptance by the private and public sectors, the authors note, and is relied upon to support a wide range of risk management strategies. One particular challenge the modelers face “is the scarcity of historical loss information. Unlike auto accidents and fires, which occur frequently and thus provide a basis for actuaries to estimate future losses,” natural catastrophes offer no such abundance of available claims data.  



As the book notes, while the probabilistic approach to catastrophe modeling is the most appropriate, “it requires modeling complex physical phenomena in time and space, compiling detailed databases of building inventories, estimating physical damage to various types of structures, and translating physical damage to monetary loss …” From the modeler’s perspective, the authors indicate, “the task is to simulate, realistically and adequately, the most important aspects of this very complex system.”



One section of the book describes the risk assessment process; another looks at how to link this assessment with insurance — specifically, how insurers can take advantage of the scientific advances in evaluating natural disaster risk to develop strategies for reducing their losses. On one level, this translates into a very simple question concerning rate making: When an insurer decides to provide coverage for a given risk, how much should it charge? But it also raises strategic issues of portfolio management and risk financing.



Catastrophe modeling is able to examine an “appropriate mix of risk management strategies,” the authors write. “An underwriter can link to a company-wide database” to determine what premium it should charge for a new account and also how this risk “correlates with others in the company’s portfolio. The portfolio manager can implement underwriting guidelines to determine what premiums to charge for new policies as a function of their location and potential hazards. Different risk transfer programs can be priced and evaluated in conjunction with an existing portfolio of risk.” The company can then decide “whether to reduce its exposure, raise its premiums, buy a catastrophe bond and/or transfer some risk to a reinsurer.”
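A toy version of the portfolio check described above might look as follows. The events, losses and the new account’s contribution are all invented for illustration; the point is only that a new account is evaluated by its marginal effect on portfolio-level exceedance, since the same event can hit both the existing book and the new policy:

```python
# Hypothetical underwriting check: compare the portfolio's exceedance
# probability with and without a prospective new account. All numbers
# are invented for illustration.

def exceedance_prob(events, threshold):
    """Annual probability that at least one independent event produces
    a portfolio loss >= threshold (in $ millions)."""
    p_none = 1.0
    for prob, loss in events:
        if loss >= threshold:
            p_none *= 1.0 - prob
    return 1.0 - p_none

# Each event: (annual probability, current portfolio loss, new-account loss);
# the new account adds loss to the same events that hit the existing book.
events = [(0.01, 180, 40), (0.04, 90, 20)]
threshold = 200  # $ millions

before = exceedance_prob([(p, loss) for p, loss, _ in events], threshold)
after = exceedance_prob([(p, loss + extra) for p, loss, extra in events],
                        threshold)
print(f"P(loss >= $200M) without account: {before:.3f}, with: {after:.3f}")
```

In this contrived example the new account pushes a previously tolerable event past the $200 million mark, which is exactly the kind of correlation effect that a per-policy premium calculation alone would miss.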



Linking Science with Policy


The book is divided into four parts. The first, entitled “Framework for Risk Management Using Catastrophe Models,” looks at the need to manage risk; the private sector stakeholders in risk management (including property owners, insurers, reinsurers, the capital markets, rating agencies and state insurance commissioners); and the government’s role in risk management. Part I also includes an introduction to catastrophe models and insurance, covering the history, structure and uses of such models; and a look at ways of quantifying the likelihood, consequences and insurability of catastrophic risks.



Part II explores natural hazard risk assessment, starting with a chapter on the risk assessment process and the role of catastrophe modeling in dealing with natural hazards. Another chapter considers the nature and impact of uncertainty on catastrophe modeling, and has case studies based on hurricane and earthquake scenarios in Florida and South Carolina.



Part III, entitled “Linking Risk Assessment with Insurance,” includes chapters written by each of the three modeling firms. It covers the use of catastrophe models in insurance rate making, looking at such topics as actuarial principles and the role of regulation. A second chapter focuses on insurance portfolio management, with an emphasis on portfolio composition and catastrophe modeling along with issues regarding portfolio risk. The final chapter in this section explores risk financing, asking the question, “What risks should be financed?” It analyzes risk financing mechanisms, the costs of risk transfer and risk financing schemes.  


Part IV focuses on risk management strategies using catastrophe models by analyzing three model cities — Oakland and Long Beach, Calif. (each facing an earthquake hazard) and Miami, Fla. (subject to hurricanes). The first chapter looks at the impact of mitigation on homeowners and insurers, insurer decision processes, homeowner decision processes and the need for workable public-private partnerships. A subsequent chapter studies the impact of risk transfer instruments by developing a framework for evaluating alternative strategies, such as reinsurance and catastrophe bonds as additional sources of funding.


Notes Kunreuther: “This eight-year project with experts from the private and public sector has been a learning experience for all of us. It has highlighted the importance of trying to quantify risks while at the same time indicating the nature of the uncertainties surrounding these estimates. We view the book as a starting point for improving the risk management process by linking science with policy.”