Most organizations recognise the need for a strategic risk framework. Such a framework typically identifies and analyzes the key strategic risks faced by the organization, such as competitive, regulatory, technological, demographic, or environmental changes. Adopted at the highest level of the organization, effective strategic risk frameworks drive resource allocation and, consequently, the ability of the organization to achieve its goals.
However, many organizations do not integrate the potential impact of catastrophes into the strategic risk framework. This can result in an organization suffering large unexpected losses from a catastrophe despite investing significant time and energy into a risk management framework. For example, a business might foresee–and mitigate–the entry of a new competitor into their market, but be caught off guard by a major flood that causes equal disruption and loss in value.
To avoid being blind-sided in this way, best practice risk management builds catastrophe risk management into the same framework as strategic (and other) risks. In this way, the full spectrum of risks is measured and managed consistently, and resources are directed at those risks that pose the greatest threat to the organization. Such optimal resource allocation underpins long run organizational success.
What is catastrophic risk? Catastrophic risk is: Stuff happens. Some unexpected, perhaps unexpectable, natural event occurs. Half a world away from its source in southern China, SARS kills 38 people in Toronto; a nuclear reactor at Chernobyl is driven into a state its designers never even imagined, even as its operators disable critical safety features, and it explodes; events in the Middle East cause Britons to blow themselves up on the London Underground.
Strategic risk is also stuff happening, but from a business point of view. An ailing computer manufacturer trounces established consumer electronics firms by producing the killer portable music device, and then follows up with a mobile phone that is both revolutionary and beautiful; tiny car firms constrained by post-war, small island scarcity eliminate waste by worshipping quality, end up reinventing the entire manufacturing process, and brutally upend incumbents; Wall Street’s best and brightest simulate endless market disruption scenarios, except the one that finally happens–no bids and no offers; total paralysis.
Beyond strategic and catastrophe risk, financial and operational risk are equally necessary if less glamorous parts of a fully functional risk framework. Only through the consistent identification, measurement, and management of the full spectrum of risks can an organization ensure that it meets its objectives successfully.
More formally, there are four core concepts in risk: Frequency, severity, correlation, and uncertainty.
An event is frequent if it occurs often. Most catastrophes are, mercifully, infrequent. Historically, there is a severe earthquake (seven or greater on the Richter scale) about once every 25 years in California. Hence, the frequency of big earthquakes in California is 1/25, or 4%, per year.
An event is severe if it causes a lot of damage. For example, according to the US Geological Survey (USGS), between 1900 and 2005 China experienced 13 earthquakes that, in total, killed an estimated 800,000 people. The average severity was therefore about 61,000 deaths per earthquake.
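To make the arithmetic explicit, here is a minimal sketch (in Python, purely illustrative, using only the figures quoted above) of how these two estimates are obtained:

```python
# Annual frequency: one severe California earthquake roughly every 25 years.
return_period_years = 25
annual_frequency = 1 / return_period_years
print(f"Frequency: {annual_frequency:.0%} per year")                 # 4%

# Average severity: 13 Chinese earthquakes, ~800,000 deaths in total (USGS, 1900-2005).
total_deaths = 800_000
number_of_events = 13
average_severity = total_deaths / number_of_events
print(f"Severity: about {average_severity:,.0f} deaths per event")   # ~61,500
```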
Most people’s perception of risk focuses on events that are low frequency and high severity such as severe earthquakes, aircraft crashes, and accidents at nuclear power plants. Strategic risk also focuses on low frequency/high severity changes, such as disruptive technologies or new entrants. However, a fuller notion of risk includes two additional concepts: Correlation and uncertainty.
Events are correlated if they tend to happen at the same time and place. For example, the flooding of New Orleans in 2005 was caused by a hurricane; the 1906 earthquake in San Francisco also caused an enormous fire.
Estimates of frequency, severity, and correlation are just that: Estimates. They are usually based on past experience, and as investors know well, past performance offers no guarantees for the future. Similarly, the probabilities, severities, and correlations of events in the future cannot be extrapolated with certainty from history: They are uncertain.
The rarer and more extreme the event, the greater the uncertainty. For example, according to the US National Oceanic and Atmospheric Administration, in the 105 years between 1900 and 2004 there were 25 severe (category four and five) hurricanes in the US. At the end of 2004, you would have estimated the frequency of a severe hurricane at 25/105, or about 24% per year. However, there were four severe hurricanes in 2005 alone. Recalculating the frequency at the end of 2005, you would end up with about 27% per year (29/106). That’s a large difference, and would have a material impact on preparations.
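The same sketch style (Python, illustrative only, using the NOAA figures quoted above) shows how a single unusual year can move an estimate built from a short historical record:

```python
# Severe (category four and five) US hurricanes, per the figures quoted above.
hurricanes_1900_2004, years_1900_2004 = 25, 105
estimate_2004 = hurricanes_1900_2004 / years_1900_2004
print(f"Estimate at end of 2004: {estimate_2004:.0%} per year")   # ~24%

hurricanes_2005 = 4
estimate_2005 = (hurricanes_1900_2004 + hurricanes_2005) / (years_1900_2004 + 1)
print(f"Estimate at end of 2005: {estimate_2005:.0%} per year")   # ~27%
```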
Which estimate is correct? Neither, and both: Uncertainty prohibits “correctness.” Uncertainty is the essence of risk and coping with it is the essence of risk management.
Both catastrophic and strategic risk management, then, are about predicting and managing the consequences of rare, severe, and potentially correlated events under great uncertainty.
Think, Plan, Do
Integrating catastrophe risk into strategic risk management requires a common conceptual framework. Best practice risk management is—always and everywhere—a three step process: Think, plan, do (Figure 1).
Thinking comes first. Before they can manage risk, risk managers must know how much risk is acceptable to themselves and their organization, and, conversely, at what point to cut their losses.
This risk appetite is not self-evident. It is a philosophical choice, an issue of comfort with the frequency, severity, and correlation of, and uncertainty around, potential events. Different individuals and organizations have different preferences.
Some people enjoy mountain climbing. They are comfortable with the knowledge that they’re holding onto a small crack in a wet rock face with their fingertips and it’s a long way down. Others prefer gardening, their feet firmly planted on the ground, their fingertips on their secateurs and not far from a cup of tea. Similarly, some organizations aspire to blue chip, triple-A solidity, others the rough and tumble of start-ups and venture capital, with the added drama of the San Andreas fault under their feet.
For strategic risk, managers attempt to simplify risk appetite down to how much money an organization is prepared to lose before it cuts its losses and changes objectives. For catastrophes, it is the frequency with which a certain event results in death–the frequency and severity of fatal terrorist attacks in London, for example. In some cases it is defined externally: On oil rigs in the North Sea, for instance, it is defined through legislation. Events that cause death more often than once in 10,000 years are not tolerable, and rig operators must mitigate the risk of any event more likely than this.
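As a sketch of how such an externally defined appetite might be applied (the threshold is the one quoted above; the events and their frequencies are hypothetical, invented here for illustration):

```python
# Tolerability threshold: fatal events no more often than once in 10,000 years.
TOLERABLE_ANNUAL_FREQUENCY = 1 / 10_000

# Hypothetical events with assumed annual frequencies (not real data).
events = {
    "gas release and fire reaching living quarters": 1 / 50_000,
    "structural failure in an extreme storm": 1 / 8_000,
}

for name, frequency in events.items():
    verdict = "tolerable" if frequency <= TOLERABLE_ANNUAL_FREQUENCY else "must be mitigated"
    print(f"{name}: {frequency:.2e} per year -> {verdict}")
```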
Planning is next. There are two parts: A strategic plan that matches resources and risks; and a tactical plan that assesses all the major risks identified, and details the response to each one.
The first part is the big picture risk appetite. If, for example, an organization decides that the frequency, severity, and uncertainty of flooding in London is too great, the big picture is that the organization needs to leave London, incurring whatever costs this requires.
The strategic big picture also has to make sense. For example, although low cost airlines need to be cheap, they cannot afford to cut corners on safety. ValuJet discovered this when it was forced to ditch its brand following a catastrophic crash in 1996, as did Spanair in 2008. Similarly, although the high command of the US Army Rangers recognizes that they operate in very dangerous environments—occasionally catastrophically so, as in Mogadishu, Somalia—and hence will on occasion lose soldiers, they have adopted a policy of “no man left behind.” This helps to ensure that in combat Rangers are less likely to surrender or retreat, perhaps winning the day as a result. Consequently, airlines spend a lot on safety, and armies spend a lot on search and rescue capabilities.
The next stage is detailed tactical planning. First, identify all the risks, strategic and catastrophic, financial and operational, all the things that might go wrong. Then, assess and compare them to see which ones are the most likely and the most damaging. Finally, figure out what to do, who’s going to do it, and how much that’s going to cost.
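One common way to make that comparison concrete, offered here only as a sketch with hypothetical figures, is to rank the identified risks by expected annual loss, that is, annual frequency multiplied by severity:

```python
# Hypothetical risk register: (risk, annual frequency, loss if it occurs, in $m).
risks = [
    ("new low-cost competitor enters market", 0.20, 50),
    ("major flood at head office",            0.01, 400),
    ("key supplier insolvency",               0.05, 30),
]

# Rank by expected annual loss = frequency x severity.
for name, freq, loss in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: expected annual loss ${freq * loss:.1f}m")
```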
California’s state-wide disaster planning process is an excellent template for responding to catastrophes, most likely because there’s plenty of opportunity to practice: All manner of major incidents there–earthquakes, tsunamis, floods, wildfires, landslides, oil spills–occur relatively frequently. State law specifies the extent of mutual aid obligations between local communities and requires each community to appoint a state-certified emergency manager. Each emergency manager creates a detailed disaster management and recovery plan for his or her local community, reflecting local issues and needs. These plans are audited by state inspectors and rolled up into a state-wide plan. The state-wide plan then feeds into the state budgeting process to obtain the necessary resources.
Critically, risk aversion does not necessarily make you safer. Many people or communities express a low risk appetite but baulk at the expense of reducing their risk to match it. They don’t put their money where their mouth is, and instead simply hope that the rare event doesn’t happen. In the end, however, even rare events occur. The results of mismatching risk appetite and resources were devastatingly demonstrated when Hurricane Katrina drowned New Orleans in 2005.
Conversely, a large risk appetite is not the same thing as recklessness. Technology venture capital firms quite deliberately “bet the farm” on a few firms in narrow technology domains that they believe will be highly disruptive, and hence profitable. This is high risk for sure, but the extensive deliberation and diligence of the investment and management processes mitigate the risk.
Doing is a combination of activities. Before an event, doing means being prepared. This consists of acquiring and positioning the appropriate equipment, communications systems, and budget; recruiting, training, and rehearsing response teams; and ensuring that both the public and the response teams know what to do and what not to do. After an event, doing means keeping your wits about you while implementing your plan, managing the inevitable unexpected events that crop up, and, to the extent possible, collecting data on the experience.
Once the epidemic has broken out or the earthquake has hit, the key is not to panic. Colin Sharples, a former aerobatic pilot and now the head of training and industry affairs at a British airline, observes that instinctively “your mind freezes for about 10 seconds in an emergency. Then it reboots.” Frozen individuals cannot help themselves or others. To counter this instinct, pilots are required to go through a continuous and demanding training programme in flight simulators which “covers all known scenarios, with the more critical ones, for example engine fires, covered every six months. Pilots who do not pass the test have to retrain.”
Most environments where catastrophes are possible have similar training programmes, albeit usually without the fancy simulation hardware. As Davy Gunn of Glencoe Mountain Rescue puts it: “Our training is to climb steep mountains in bad weather, because that’s what we do [when we’re called out].” In addition to providing direct experience of extreme conditions, such training also increases skill levels to the point where difficult activities become routine, even reflexive. Together, the experience and the training allow team members to create some “breathing space” with respect to the immediate danger. This breathing space ensures that team members can play their part and in addition preserve some spare mental capacity to cope with unexpected events.
The importance of this “breathing space” reflects a truth about many extreme situations: They don’t usually start out that way. Rather, a “chain of misfortune” builds up, where one bad thing builds on another and the situation turns from bad to critical to catastrophic. First, something bad happens: A patient reports with novel symptoms and doesn’t respond to treatment. Then they die … then one of their caregivers dies too. Then one of their relatives ends up in hospital with the same symptoms … and so on. A team with “breathing space” can interrupt this chain by solving problems at source as they arise, allowing them no time to compound. For example, a paranoid and suspicious infectious disease consultant (the best kind) might isolate the patient and implement strict patient/physician contact precautions before the infection can spread.
Close the Loop
When the doing is over and the situation has returned to normal, risk managers must close the loop and return to thinking. The group has to ask itself “so how did it go?” Using information collected centrally and participants’ own experience, each part of the plan is evaluated against its original intention. This debrief can be formal or informal, depending on what works best. Sometimes it might even be public, such as the Cullen Inquiry into the disastrous Piper Alpha North Sea oil platform fire in 1988, which cost 165 lives.
Where performance was bad, the group must question whether the cause was local (training, procedures, and equipment) or strategic (the situation was riskier than the organization is willing to tolerate, or able to afford). These conclusions feed into the next round of thinking and planning.
The main pitfall in the integration of catastrophe risk into strategic risk management is an insufficiently holistic process. Usually this stems from the separation of strategy development, risk management, and, in many cases, insurance. In many organizations strategy development is the sexiest assignment, and is jealously guarded by its departmental owners. As a result, strategic plans can in some cases be insufficiently informed by risk assessment. Risk management departments often do not help themselves, since they tend to communicate in jargon and equations. Separately–and strangely in this author’s view–insurance is sometimes not part of the risk management organization. Rather, it is part of the finance area, and an obscure part at that. Consequently, decisions on which risks to cover and to what degree can be taken in complete isolation from the organization’s overall risk appetite. Such disconnects between different parts of the risk assessment process lead to a lack of integration, and ultimately to inconsistent treatment of risks and misallocation of scarce resources.
Morgan Stanley was until recently a leading American investment bank. Investment banking is not for the faint-hearted, as it involves taking very large financial risks. Consequently, Morgan Stanley invested very large amounts in financial risk management. In general, this worked well and the firm was mostly profitable through the 1990s.
Managing financial risk was merely par for the course for investment banks, though. One of the things that set Morgan Stanley apart from its peers was its assessment of catastrophe risk at one of its major operational hubs: The World Trade Center (WTC) in downtown New York. Its corporate security manager, a decorated former soldier named Rick Rescorla, predicted the 1993 WTC bombing and had been able to convince the firm that such an attack would happen again. The firm had committed to move out at the end of its lease in 2006. On September 11, 2001, Morgan Stanley had 3,700 employees in the WTC. All but six, one of them Rescorla, got out alive, a direct result of constant practice and calm execution.
The integration of catastrophe risk into the strategic risk framework of the firm saved many lives. Few cases are this dramatic, but the point is the same: Risks are risks, regardless of source. The way we label them is entirely arbitrary. If, because of that labelling, we fail to treat all risks consistently, the consequences can be serious.
Making It Happen
In terms of implementation, there are five key principles. First, integration can only come from the top down. Only an organization’s senior management can both view the full holistic picture and require compliance further down. Second, the integration has to be genuinely “lived” by the senior managers. If employees feel that integration is merely lip service, they will not participate and the effort will fail. Third, since risk appetites tend to be low with respect to very severe events, the resultant scarcity of events may breed hubris: It hasn’t happened for a while, therefore it probably won’t or can’t happen again. In industrial settings, researchers have observed that the odds of a serious accident increase with the time elapsed since the last one. Avoiding this complacency is critical. Fourth, conversely, is the balance between sounding the alarm and having people respond. The more often an alarm sounds, the more likely it is that individuals will assume it’s just a drill, or faulty, and tune it out. However, if an alarm never sounds, no one will know what to do. There is no single right answer to the third and fourth points beyond the first two principles: A genuine, heartfelt impetus from the top down. Finally, many risk issues are amenable to sophisticated mathematical and computational treatments. There is a temptation to assume that just because a risk is measured, it is managed. It isn’t.