Cascadia by the Numbers
Probabilities… Those pesky things we think we understand, but usually don’t. Take, for example, earthquake probabilities in Cascadia: a seemingly simple question that is not so simple, even in a region where we have a lot of data. Since we are unable to predict earthquakes, the best we can generally do is produce forecasts based on some model of recurrence, on actual data, or on something in between. Models of recurrence have taken big hits recently, with the Sumatra and Tohoku earthquakes essentially terminating a popular model that had been used for decades (Ruff and Kanamori, 1980). That leaves us with probabilities derived from actual data. These are not common, because records of past earthquakes, whether from the instrumental record or from paleoseismology, are usually too short to be very useful. But as luck would have it, Cascadia has one of the longest records available, so actual data can be used in this case, and it has a reasonable chance of representing reality without major bias. An important question is whether 10,000 years of record and ~43 events are long enough. We really don’t know whether they are, but they’re what we have. Most other faults around the world, if any data are available at all, have records ranging from 100 to 4,000 years at best, with a few longer.
So with 10,000 years of record, what are the probabilities? A lot of numbers have been batted around, particularly in the month since the New Yorker article came out. Why the different numbers? The short answer is that there are several different sources, and the numbers also vary spatially. The earliest records for Cascadia came from the Washington coast, and those numbers are commonly stated as a ~10-15% chance in 50 years, based on a 3,500-year record from Willapa Bay. With the advent of a much longer record using both land and marine paleoseismic data, the probabilities for Washington did not change. This was pure coincidence: a random 3,500-year subsample could have given very different numbers. But as luck would have it, they are the same, and that’s helpful.

The New Yorker article mentioned a “one in three” chance in the next 50 years. This number is based on Cascadia-wide paleoseismology, which shows through a number of land and marine studies that recurrence intervals are shorter in southern Cascadia, which appears to have roughly twice as many events as Washington. A misreading of the Schulz article led some people to believe that the “one in three” applied to all locations in Cascadia, including Seattle, which it does not. It applies to any earthquake in the region that has passed enough criteria to be both recorded in the geologic record and published with peer review. The magnitudes run as low as ~8.0, but are not well constrained at all. This number is therefore likely a minimum, since events at the low end could have been missed, and likely were.

Another set of numbers, less commonly quoted, comes from the USGS National Seismic Hazard Maps, recently updated in 2014. One product of these maps is a “probability of exceedance” map. One useful depiction of the hazard for inland cities is the 2% probability of exceedance in 50 years for a ground-motion level of 0.3 g in a Cascadia M9 earthquake. Most of our cities are located more than 100 km from the coast, so ground motions at that level are fairly high for that distance. Despite the small number (a loop in an airplane is ~4 g), the long duration of a subduction earthquake and the large stock of unreinforced masonry (URM) buildings make even modest 0.3 g shaking very damaging. But 0.3 g represents an extreme event, known as the “2,500-year event,” something that repeats only every 2,500 years. In Cascadia, that means one of the four largest events out of 43, the biggest of the big. So a 2% probability of exceeding an extreme event is low, only 2%. Or, as a colleague put it recently, a 98% chance that it won’t happen in the next 50 years! This sounds reassuring, but it isn’t.
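For readers who want to see where numbers like these come from, here is a minimal sketch of the simplest possible forecast: a time-independent (Poisson) model that converts a mean recurrence interval into a 50-year probability. The event counts are inferred from the record described above; the published forecasts use time-dependent models conditioned on the 315 years already elapsed, so they come out higher than this back-of-the-envelope version.

```python
import math

# Counts inferred from the record described above: ~43 events total in
# ~10,000 years, of which ~23 are confined to southern Cascadia,
# leaving ~20 full-margin ruptures. Illustrative, not model values.
RECORD_YEARS = 10_000
mean_intervals = {
    "Washington (full-margin events only)": RECORD_YEARS / 20,
    "southern Cascadia (all events)": RECORD_YEARS / 43,
}

WINDOW = 50  # forecast window, years

for region, T in mean_intervals.items():
    # Time-independent (Poisson) chance of at least one event in the
    # window: P = 1 - exp(-t/T). This ignores the time already elapsed
    # since 1700, which is why conditional forecasts run higher.
    p = 1 - math.exp(-WINDOW / T)
    print(f"{region}: mean interval ~{T:.0f} yr, "
          f"50-yr probability ~{p:.0%}")
```

Run as written, this gives roughly 10% for Washington and roughly 19% for southern Cascadia, bracketing the quoted figures from below, as a memoryless model should.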
Yet another way to look at the same numbers is to ignore probabilities and just look at the raw data. Rather than show a confusing plot, I’ll just say it in plain English. The 10,000-year paleoseismic record now includes ~43 events, including ~23 “smaller” ones (~M8-8.7) in the southern half of Cascadia. Each pair of consecutive events has an interval between them, and of course these intervals have large uncertainties. But in rough terms, in the 315 years since the last earthquake we have already exceeded ~75% of those intervals. What? That sounds more alarming than the numbers described above! But it isn’t; it comes from the same data. Fifty years from now, we will have exceeded ~85% of the past intervals, leaving only six that were clearly longer than 365 years. Looking at data in this way is called a failure analysis, the same type used to decide what the warranty on a disk drive should be. Obviously the warranty should expire before lots of drives start to fail, and you simply get the data from the repair department to calculate it. A fault is simply a “part” that fails under stress, and with enough data, its failure record can be treated the same way.
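The counting behind this is simple enough to sketch. The intervals below are synthetic stand-ins (drawn from a lognormal distribution just to make the sketch runnable; the real values come from the paleoseismic record itself), but the logic is the same: what fraction of past intervals is shorter than the time already elapsed?

```python
import numpy as np

# Synthetic stand-ins for the ~42 inter-event intervals in the record.
# A lognormal with a median near 230 years is assumed here purely for
# illustration; the real intervals come from the paleoseismic data.
rng = np.random.default_rng(0)
intervals = rng.lognormal(mean=np.log(230), sigma=0.5, size=42)

def fraction_exceeded(intervals, elapsed_years):
    """Share of past intervals already outlasted -- in warranty terms,
    the fraction of 'parts' that would have failed by now."""
    return float(np.mean(intervals < elapsed_years))

print(f"intervals exceeded after 315 yr: "
      f"{fraction_exceeded(intervals, 315):.0%}")
print(f"intervals exceeded after 365 yr: "
      f"{fraction_exceeded(intervals, 365):.0%}")
```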
Here are a couple of other numbers that might be interesting. In northern Sumatra prior to 2004, many earth scientists, including me, would have assessed the seismic potential of the area as a near-zero probability of generating an M9 earthquake. The reasons? First, the old Ruff and Kanamori model, using plate age and convergence rate, predicted very low chances there. The rate of convergence was thought to be very low (highly oblique, potentially zero convergence), and the plate is fairly old, both factors a recipe for no significant strain accumulation and no earthquakes of significance. Art Frankel pointed out that a seismic assessment published in 2004 (Petersen et al., 2004) did not use the older models and considered the historical great earthquakes farther south in central Sumatra. So awareness of the problem was on the rise, yet nearly all of the 2004 rupture area lay north of that study and those by Sieh and colleagues, and was very poorly known. This system failed in spectacular fashion (~Mw 9.15) at a time when the informal probabilities would have been rated very low, and no data existed with which to do any better. Northeast Japan was in much the same boat, and failed with the same near-zero consensus probability of an M9 earthquake. Even taking into account the paleoseismic data (published in 2001 but not considered in the Japanese hazard assessment; Minoura et al., 2001), the probability would have been ~45-55% in 2010 for the next 50 years, based on ~3,000 years of record (and assuming that the 3,000-year record is representative, which is doubtful). If we use a more typical value for long-term variability, the number would be lower still, ~10-50%. The point is that failure doesn’t occur when the probability numbers hit 100%; it may well occur at much lower values, 50% or less in the case of Japan in 2011. So be careful with stats!
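To make that last point concrete, here is a rough sketch of the kind of calculation involved: a conditional probability from a lognormal renewal model. The inputs (a ~1,000-year recurrence for Jogan-type events and ~1,140 years elapsed by 2010, counting from the 869 AD Jogan earthquake) are illustrative assumptions, not published hazard-model values, but they show how strongly the answer hinges on the assumed variability of the intervals.

```python
import math

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def conditional_prob(median_interval, sigma_ln, elapsed, window):
    """P(event in the next `window` years, given `elapsed` quiet years),
    under a lognormal renewal model with the given median interval and
    log-standard-deviation."""
    def survival(t):
        return 1.0 - norm_cdf(math.log(t / median_interval) / sigma_ln)
    return 1.0 - survival(elapsed + window) / survival(elapsed)

# Illustrative inputs only: ~1,000-yr recurrence, ~1,140 years elapsed
# by 2010. Sweep the assumed interval variability to see its effect.
for sigma in (0.1, 0.2, 0.3):
    p = conditional_prob(1000, sigma, 1141, 50)
    print(f"interval variability sigma_ln={sigma}: "
          f"50-yr conditional probability ~{p:.0%}")
```

With these assumptions the answer swings from well over half (for a very regular fault) down to the mid-teens as the assumed variability grows, which is exactly the spread between the two ranges quoted above.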