Much of futures research focuses on identifying trends of one sort or another and defining the key events that might alter their course. For example, much of my work in forecasting trends in telework deals with defining and quantifying the major factors that influence the growth (or decline) in its acceptance. For the most part, forecasting such trends involves estimating (guessing) the parameters of logistic curves. Logistic curves appear in most growth forecasts, such as this one for teleworkers around the world. Often, as in forecasts of future microchip performance following Moore's Law, the growth can be seen as a series of logistic curves, each of which grows to a peak and then declines in use. Of course, the day-to-day details of growth and decline are much noisier than those smooth curves. The primary parameters in such curves are the growth rate, the peak value, and the subsequent decline rate. Are you still awake?
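To make those parameters concrete, here is a minimal sketch of a logistic growth curve in Python. The numbers are purely illustrative assumptions, not actual telework figures: a saturation level of 100 (say, millions of adopters), a growth rate of 0.5 per year, and an inflection point at year 10.

```python
import math

def logistic(t, peak, rate, midpoint):
    """S-shaped logistic curve: slow start, rapid growth, leveling off at `peak`."""
    return peak / (1.0 + math.exp(-rate * (t - midpoint)))

# Hypothetical parameters: saturation at 100, growth rate 0.5/year,
# inflection (fastest growth) at year 10.
curve = [logistic(t, 100.0, 0.5, 10.0) for t in (0, 5, 10, 15, 20)]
```

Fitting a real forecast means estimating `peak`, `rate`, and `midpoint` from noisy adoption data, which is where the guessing comes in.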
The preceding covers the business-as-usual part of forecasting. As Gordon Moore discovered, it's a fairly straightforward job to estimate technological trends, and such trends make it fairly easy to plan for the events they imply. The task becomes much more difficult when we inject human actions, or those of what we call nature, into the mix. Then we can have severe discontinuities. Take some recent history.
Japan is known worldwide for its attention to earthquake-resistant structural design. Many of its buildings have been built to standards that would ensure survival in the strongest likely earthquake. Similarly, much of its coastline includes breakwaters and other structures built to resist the most powerful likely tsunami. Of course, in the real world there is always an economic decision to be made when a new building or breakwater design needs approval: how much does it cost to build it strong enough to resist an earthquake or tsunami of size X? And how likely is it that size X will occur during the life of the structure?
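That approval decision is, at bottom, an expected-value comparison. A minimal sketch, with wholly hypothetical numbers for the event probability, the damage avoided, and the extra construction cost:

```python
def hardening_pays_off(p_event, damage_avoided, hardening_cost):
    """Crude expected-value test: over the structure's life, does the
    expected damage avoided exceed the extra construction cost?"""
    return p_event * damage_avoided > hardening_cost

# Hypothetical: a 1% lifetime chance of a quake causing $1B in damage,
# weighed against $5M of extra construction cost.
decision = hardening_pays_off(0.01, 1_000_000_000, 5_000_000)
```

The trouble is that for rare events the probability term is both tiny and deeply uncertain, which is exactly what makes the calculation so easy to argue away.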
Here's where the "it couldn't happen on my watch" problem rears its ugly head. The reasoning goes like this: earthquakes of Richter magnitude 9.0 almost never happen and, even if they do, the chances are that they will be on known faults on dry land. So let's confine our preparations to, say, magnitude 8.0 quakes on dry land, like those on the San Andreas fault in California. Why waste all that money girding up for something that may never happen?
This is the dilemma of planning for the low-probability, high-impact (LPHI) event. It's the combination of great uncertainty of occurrence with enormous cost of occurrence that gives, or should give, planners sleepless nights. In Japan, apparently, no one planned for a 9.0 offshore quake on a subduction zone with its resultant tsunami; the likelihood of that happening was just too remote for anyone to commit a few extra million dollars for structural improvements to buildings or breakwaters. Or to consider the wisdom of relocating a near-shore nuclear facility to something more tsunami-proof.
I should add that, according to the 26 March 2011 Financial Times, seismologist Yukinobu Okamura warned Tokyo Electric Power Company (Tepco) that its design criteria were insufficient to cope with an earthquake and tsunami that had already occurred in the area (in 869). Okamura issued the warning in June 2009; Tepco ignored it.
The recent catastrophes in Japan are by no means unique. There are more than enough recent LPHI events to go around; Hurricane Katrina and the Gulf oil spill, to name two. Such things are very hard to forecast and even harder to assess in terms of economic impact. Add to this the uncertain discount rate on the future (how much are we willing to invest now to avoid paying X at time T?) and the dilemma binds even tighter.
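The discount-rate point can be made concrete with standard present-value discounting. The figures below are illustrative assumptions, not sourced estimates:

```python
def present_value(future_cost, annual_rate, years):
    """Value today of a cost paid `years` from now, discounted at `annual_rate`."""
    return future_cost / (1.0 + annual_rate) ** years

# Hypothetical: a $10B disaster 50 years out, discounted at 5% per year,
# "justifies" well under $1B of prevention spending today.
pv = present_value(10e9, 0.05, 50)
```

Small changes in the assumed rate swing the answer by billions, which is one more reason LPHI prevention is so easy to defer.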
Here we are now with 20/20 hindsight. How good is our foresight for future LPHI events? Do you think we should spend more time on this dilemma relative to our energy, pollution, and civil unrest futures?