Earthquake musings

Cascadia by the Numbers

Probabilities… Those pesky things we think we understand, but usually don’t. Take for example earthquake probabilities in Cascadia. This seemingly simple question is not so simple even in a region where we have a lot of data. Since we are unable to predict earthquakes, the best we can do generally is produce forecasts based on either some model of recurrence, or on actual data, or something in between. Models of recurrence have taken big hits recently, with the Sumatra and Tohoku earthquakes essentially terminating a popular long-held model that had been used for decades (Ruff and Kanamori, 1980). That leaves us with probabilities derived from actual data. These are not common because the records of past earthquakes, from either the instrumental record or paleoseismology, are usually too short to be very useful. But, as luck would have it, Cascadia has one of the longest records available, and so actual data may be used in this case, and may have a reasonable chance of representing reality without major bias. An important question: is 10,000 years of record and ~43 events long enough? We really don’t know if it is or it isn’t, but it’s what we have. Most other faults around the world have records, if indeed any data at all are available, ranging from 100-4000 years long at best, with a few longer.

So with 10,000 years of record, what are the probabilities? There have been a lot of numbers batted around, particularly in the past month since the New Yorker article came out. Why the different numbers? The short answer is that there are a number of different sources, and also that the numbers vary spatially. The earliest records for Cascadia came from the Washington coast, and these numbers are commonly stated as ~ 10-15% chance in 50 years. This was based on a 3500 year record from Willapa Bay. With the advent of a much longer record using both land and marine paleoseismic data, the probabilities for Washington did not change. This was pure coincidence, because a random 3500-year subsample could have given very different numbers. But as luck would have it, they are the same and that’s helpful. The New Yorker article mentioned a “one in three” chance in the next 50 years. This number is based on Cascadia-wide paleoseismology, which shows through a number of both land and marine studies that the recurrence intervals are shorter in southern Cascadia, which appears to have roughly twice the number of events as Washington. One misreading of the Schulz article caused people to believe that the “one in three” applied to all locations in Cascadia, including Seattle, which it does not. It applies to any earthquake that has passed enough criteria to be both recorded in the geologic record and published with peer review in the region. The magnitudes are as low as ~ 8.0, but are not well constrained at all. As such, this number is likely a minimum, since events at the low end could have been missed, and likely were. Another set of numbers, less commonly quoted, are those from the USGS National Seismic Hazard maps, recently updated in 2014. One of the products of these maps is a “probability of exceedance” map. One useful depiction of the hazard for inland cities is the “2% probability of exceedance” for a ground motion level of 0.3g in a Cascadia M9 earthquake.
Most of our cities are located > 100 km from the coast, so 0.3g is a pretty high ground motion at that range. Despite the small number (a loop in an airplane is ~ 4g), the long duration of a subduction earthquake and a high proportion of URM (unreinforced masonry) building stock make even modest 0.3g shaking very damaging. But 0.3g represents an extreme event, known as the “2500 year event”, something that repeats only every ~ 2500 years. In Cascadia, that means one of the four largest events out of 43, the biggest of the big. So, a 2% probability of exceeding an extreme event is low, only 2%. Or as a colleague referred to it recently, a 98% chance that it won’t happen in the next 50 years! This sounds reassuring, but it isn’t.
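For readers who like to check the arithmetic, the “2% probability of exceedance in 50 years” and the “2500 year event” are two descriptions of the same rate. The conversion below assumes a simple time-independent (Poisson) rate, which is the standard back-of-the-envelope version; the code is my sketch, not taken from the hazard maps themselves:

```python
import math

# "2% chance of exceedance in 50 years" implies an annual exceedance rate
# via P = 1 - exp(-rate * t); inverting that rate gives the return period.
p, t = 0.02, 50.0
rate = -math.log(1 - p) / t      # annual exceedance rate
return_period = 1.0 / rate       # mean recurrence of that ground-motion level

print(round(return_period))      # ~2475 years, commonly rounded to "2500"
```

The exact figure is about 2475 years, which is why hazard documents sometimes say “2475-year event” and sometimes round to 2500.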

Yet another way to look at the same numbers is to ignore probabilities, and just look at the raw data. Rather than show a confusing plot, I’ll just say it in plain English. The 10,000 year paleoseismic record now includes ~ 43 events, including ~ 23 “smaller” ones in the southern half of Cascadia (~M8-8.7). Each pair of events has an interval between them, and of course these have large uncertainties. But in rough terms, we have presently exceeded ~ 75% of those intervals since the last earthquake 315 years ago. What? That sounds like a more alarming number than the ones described above! But it isn’t, it comes from the same data. 50 years from now, we will have exceeded ~ 85% of the past intervals, leaving only 6 that were clearly longer than 365 years. Looking at data in this way is called a failure analysis, the same type used to decide what the warranty should be on a disk drive. Obviously it should expire before lots of them start to fail, and you simply get the data from the repair department to calculate it. A fault is simply a “part” that fails under stress, and with enough data, its failure data can be treated the same way.
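The failure-analysis view is easy to sketch in code. The interval list below is entirely illustrative — invented numbers chosen only so the fractions mirror the rough ~75%/~85% figures above, not the actual Cascadia intervals:

```python
# Illustrative inter-event intervals in years (NOT real Cascadia data).
intervals = [100, 120, 150, 170, 190, 200, 210, 230, 240, 250,
             260, 270, 280, 290, 300, 320, 350, 400, 500, 600]

def fraction_exceeded(elapsed, intervals):
    """Fraction of past intervals already shorter than the open interval."""
    return sum(1 for x in intervals if x < elapsed) / len(intervals)

print(fraction_exceeded(315, intervals))  # today: 315 years since AD 1700
print(fraction_exceeded(365, intervals))  # 50 years from now
```

With these made-up numbers the open interval has outlasted 75% of the past intervals today and would outlast 85% of them in 50 years — the same kind of tally a warranty analyst would run on drive-failure data.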

Here are a couple of other numbers that might be interesting. In northern Sumatra prior to 2004, many earth scientists, including me, would have assessed the seismic potential of the area as near zero probability of generating an M9 earthquake. The reasons? First, the old Ruff and Kanamori model, using plate age and convergence rate, predicted very low chances there. The rate of convergence was thought to be very low (highly oblique, potentially zero convergence), and the plate age is pretty old, both factors a recipe for no significant strain accumulation, and no earthquakes of significance. Art Frankel pointed out that in 2004 a seismic assessment was published (Petersen et al., 2004) that did not use the older models, and considered the historical great earthquakes further south in central Sumatra. So awareness of the problem was on the rise, yet nearly all of the 2004 rupture area was north of their study and those by Sieh and colleagues, and very poorly known. This system failed in a spectacular way (~ Mw 9.15) when the informal probabilities would have been rated very low, and no data existed with which to do any better. Northeast Japan was in much the same boat, and failed with the same near zero consensus probability of an M9 earthquake. Even if we take into account the paleoseismic data (published in 2001 but not considered in the Japanese hazard assessment; Minoura et al., 2001), the probability would have been ~ 45-55% in 2010 for the next 50 years based on ~ 3000 years of record (assuming that the 3000 year record is representative, doubtful). If we use a more typical value for variability over the long term, the number would be even lower, 10-50%. The point is that failure doesn’t occur when the probability numbers hit 100%, it may well occur at much lower values, 50% or less in the case of Japan in 2011. So be careful with stats!
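The “failure doesn’t wait for 100%” point can be made concrete with a time-dependent recurrence model. The sketch below uses a lognormal recurrence distribution; the mean interval, coefficient of variation, and elapsed time are illustrative stand-ins of my own choosing, not the published Tohoku values:

```python
import math

def lognorm_cdf(t, mean, cov):
    """CDF of a lognormal recurrence model with given mean and
    coefficient of variation (aperiodicity)."""
    sigma2 = math.log(1 + cov**2)
    mu = math.log(mean) - sigma2 / 2
    return 0.5 * (1 + math.erf((math.log(t) - mu) / math.sqrt(2 * sigma2)))

def conditional_prob(elapsed, window, mean, cov):
    """P(event within `window` years | quiet for `elapsed` years)."""
    num = lognorm_cdf(elapsed + window, mean, cov) - lognorm_cdf(elapsed, mean, cov)
    den = 1 - lognorm_cdf(elapsed, mean, cov)
    return num / den

# Illustrative numbers: mean recurrence 1000 yr, COV 0.5,
# ~1140 yr elapsed (roughly AD 869 to 2010), 50-year window.
print(round(conditional_prob(1140, 50, 1000, 0.5), 2))
```

With these made-up parameters the conditional 50-year probability comes out near 10%, even though more than a full mean interval has already elapsed — exactly the situation in which a fault can fail while the forecast still looks modest.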

Written by eqgold

August 18, 2015 at 5:28 am

Posted in Uncategorized

6 Responses

  1. So with the 8.3 M just happening off of Chile, will that have any effect on the cascadia fault? It would seem that since we are on the same coast line, (even though we are north by quite a few miles) that a movement that large would have some effect. What are your thoughts?

    Thanks Rich Wolles


    Richard Wolles

    September 17, 2015 at 12:30 am

    • Well, likely this event was too distant and too small to affect the stress state of the Cascadia fault, so my guess is no. But at the same time, the Chile 8.3 earthquake is a good example of the type of earthquakes we have in southern Cascadia, what we call “Segment D”. These earthquakes generate tsunami ~ 5 m in height, and are poorly recorded in onshore geology, but faintly evident in the offshore stratigraphic record. Roughly half of Cascadia earthquakes may be much like this one.

      eqgold

      September 28, 2015 at 7:33 pm

  2. I had another question about the 10,000 year record you have referred to. I was wondering how this number of years was calculated? And no I’m not a creationist, I was just curious how it was arrived at, as I had read an article from a 2012 October issue of the Scientific American on carbon dating, which talked about how various geologic, atmospheric and solar processes can influence atmospheric carbon-14 levels.
    The article mentions a Bronk Ramsey, whose team collected 70-metre core samples from Lake Suigetsu, west of Tokyo and painstakingly counted the layers to come up with a direct record stretching back 52,000 years. Their findings led them to “reset” the carbon clock.

    So I was curious which method you used to calculate the time frame for past earthquakes, counting soil layers, carbon dating or some other process. If the carbon dating can be off by hundreds of years would that not affect where we are in the stream of time for the next major earthquake? Sorry, I’m not a geologist so my questions are probably answered in Geology 101.

    thanks again

    Richard Wolles

    August 21, 2015 at 11:24 pm

    • Hi Richard,

      The age dating is radiocarbon. We carbon 14 date small microfossils called foraminifera deposited just below each turbidite. The calibration of these ages to calendar years uses the latest calibration curves developed by a large group of specialists, including Southon and many others. The calibration curves are revised every few years as new data come to light. The land curve is mostly based on bristlecone pines in the mountains of Nevada, the longest lived trees on Earth. Then there is a model applied to that to get the offset between land and marine ages, which come from water that has been out of contact with the atmosphere for some time. It’s a bit complicated, and surprising that it works at all, but actually works incredibly well. So the curve we use easily covers the 10,000 years we need. We can reach a bit further back in time with the cores, but at earlier times, lowered sea level means the rivers and the offshore canyons were connected, and we see many, many turbidites that are most likely storm related, with earthquakes mixed in. More recently than ~ 10,000 years ago, most of the offshore sites we use are isolated from direct terrestrial input, and become viable recorders of earthquakes, without the “noise” of storms. The article you mentioned helps to push radiocarbon calibration back to earlier times than could be done before. We could still do the dating, but did not have a calibration curve. So for people dating things in the 20,000-50,000 year range, the ages were much less precise than younger ones.

      eqgold

      August 22, 2015 at 1:19 am

  3. Have there been any studies that show a correlation between mega thrust earthquakes and volcanic eruptions? I would think there would be a geologic record of it if there was. Would Mt. Hood, Mt. Rainier and the 3 Sisters be affected by the Cascadia fault hitting the 8 or 9 scale? St. Helens blowing its top didn’t seem to initiate anything.

    thanks

    Richard Wolles

    August 21, 2015 at 5:12 am

    • Good question! Right now the earthquake history is in pretty good shape, but the eruptive history of the volcanoes is not good enough for a comparison. In our 2012 paper we proposed that the 14th earthquake back likely happened in the same year as Mt Mazama (~ 7625 years ago), with the volcano going first. In 1960 in Chile, a major eruption occurred 38 hours after the 9.5 earthquake. So the hypothesis that a stress relieving earthquake might influence a volcano or vice versa is alive and well, but awaiting a definitive long-term test. As you point out, it didn’t happen in 1980, so clearly it doesn’t happen every time, but how often is a completely open question.

      eqgold

      August 21, 2015 at 5:32 am

