At the conclusion of my second week of the internship, I find myself staring at the computer screen, watching the shell script spit out line after line of output -- it's been running for three days now. This is a good thing. It means that no errors have been encountered yet in converting the data to a format that can eventually be analyzed with the MFAST code. Given that this batch of data takes about a week to process, let's hope that everything goes correctly the first time! You may be wondering what on Earth could take this long to process. Let me explain:
In my experience, whenever I have heard a seismologist talk about processing the data for their picked events, they are usually referring to anywhere from a couple of earthquake events to several tens of them, maybe even on the order of a hundred if they're ambitious -- that's cute. Without a doubt, my mentor must be going for some kind of record in the seismology hall of fame (which, to my knowledge, doesn't exist). We are processing -- brace yourselves -- sixty-one thousand three hundred and sixty-three events (61,363)! That is more than two orders of magnitude greater than the number of events even the most ambitious seismologists usually tackle in a research project! Then again, I've never heard a scientist say "You have too much data."
When an earthquake occurs, sometimes it is preceded by a couple of foreshocks, and it is always followed by many, many aftershocks -- many, many...many aftershocks. Most people's attention, understandably, would be drawn to the main event itself: the big climax of the earthquake story, the epic event that causes all the buildings to sway and bridges to collapse, with power plant explosions and fires and other exciting things (assuming the earthquake is large enough, of course). As implied, these "big events" usually occur only once in the story of an earthquake, and once one has passed, most normal people lose interest and resume their day. Seismologists, on the other hand, are not normal people. Oh no, we haven't seen the end of the story yet! The plot goes on!
If we were to analyze the story of a typical earthquake, we would see a relatively short period of small rumblings, a big jump in activity, and then a much longer period of continued smaller rumblings. However, every earthquake is unique in one way or another. For example, some earthquakes exhibit detectable foreshocks, whereas others do not. Or perhaps a sequence has two or three large events instead of just one. Using this story timeline as an analogy, my dataset focuses on the small rumblings after the big event...except there's a twist.
On November 5, 2011, a magnitude 5.0 (M5.0) earthquake ruptured the Wilzetta fault in Oklahoma. Less than a day later, an M5.7 earthquake occurred less than 2 km away from the first. On November 8, another M5.0 earthquake struck. Each earthquake generated thousands of its own aftershocks. In this sequence of events, the first earthquake is classified as a large foreshock, the second as the main event, and the third as a large aftershock. The story of the Oklahoma earthquake sequence therefore has a large event in each section of the timeline, and each earthquake is thought to have activated a separate portion of the Wilzetta fault system. My dataset consists of the thousands of aftershocks recorded by 47 seismometer stations in the days and months following the first M5.0 earthquake.