Both of these pictures represent magnitude 4.5+ earthquake events, and both have been processed exactly the same way. Would I have known that if I weren't processing the raw data myself? Absolutely not. To me, and probably to many of you, the event on the right just looks like noise. For all practical purposes, I have been told to treat events like it as noise, if only to retain my sanity. Filtering and extracting relevant data from such an event would probably end up haunting me in my dreams, considering the catalog of 2013 data I am processing contains roughly 900 events.
I'm sharing this with you because I've been learning some lessons about working with real data. It's messy. It doesn't always cooperate, and it definitely doesn't look pretty. If my new data were cast as a college student, it would almost certainly be the lazy undergraduate who overslept and sprints out the door to his test without any consideration of his personal appearance. My job for the next week or two is to sort through the OIINK data from the first quarter of 2013 and process the events that look like the event on the left. The reason some of the events look better than others comes down to a number of factors, most notably the size of the earthquake. Since magnitude is a logarithmic scale, the larger events show up orders of magnitude cleaner than the smaller events. That said, there are also a number of other factors that can influence how the data is recorded. What was happening near the stations when the event passed through? Was a truck driving by? Was a mine explosion going off? There is an almost infinite number of things that could impact how the data from a specific event is recorded. Another question you might have, as I did when looking at data similar to the figure on the right, is how the events are even picked if so much noise exists in the data. I learned that the process is actually just a matter of applying standard travel times and station distances to teleseismic events cataloged by the USGS National Earthquake Information Center.
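For anyone curious about the mechanics of that, here's a minimal sketch of how a predicted P arrival can be computed from a cataloged event and a station location. This isn't the project's actual script, just an illustration assuming the obspy library and a standard 1-D reference model; the event parameters and station coordinates below are made up.

```python
from obspy import UTCDateTime
from obspy.taup import TauPyModel
from obspy.geodetics import locations2degrees

# Hypothetical teleseismic event, as it might appear in a NEIC catalog
origin_time = UTCDateTime("2013-02-06T01:12:25")      # made-up origin time
ev_lat, ev_lon, ev_depth_km = -10.80, 165.11, 24.0    # made-up hypocenter

# Hypothetical station location (illustrative coordinates only)
sta_lat, sta_lon = 38.5, -87.5

# Epicentral distance in degrees, then a standard travel-time lookup
dist_deg = locations2degrees(ev_lat, ev_lon, sta_lat, sta_lon)
model = TauPyModel(model="iasp91")  # standard 1-D reference Earth model
arrivals = model.get_travel_times(source_depth_in_km=ev_depth_km,
                                  distance_in_degree=dist_deg,
                                  phase_list=["P"])

# Predicted P arrival = origin time + model travel time
predicted_p = origin_time + arrivals[0].time
print(f"Distance: {dist_deg:.1f} deg, predicted P at {predicted_p}")
```

With a predicted arrival in hand, you know where in the (possibly very noisy) record to look for the event, even when nothing jumps out by eye.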
As I mentioned earlier, there are roughly 900 events that I am beginning to sort through, all of magnitude 4.5 or above. From this larger collection I hope to pull out 80-100 events whose P-wave first arrival times can be analyzed well using the same process I described in my first blog post. Most of these events will be magnitude 5.0 or larger, as that seems to be a rough threshold for where the events show up relatively cleanly. The 2012 data that Josh and I previously analyzed had already been cleaned up by someone else, making it far easier to pick the arrival times. So it was a bit of a shock when I was sorting through the 2013 data and it seemed like nothing was there. But lo and behold, about every 20 events I sort through, I get a beauty. Then I do a little victory dance and promptly pick the arrival times. All of the data I'm processing, both the 2012 and 2013 sets, will be used as input for my tomography model. By picking the P-wave arrivals by hand, we can calculate residuals relative to standard arrival times, which helps improve the accuracy of the model. Combining my work with the 3D Illinois Basin model from the Indiana Geological Survey, which Josh is currently translating into a format useful for both of our projects, will hopefully help my tomography model more accurately represent the structure of the region.
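The residual itself is simple arithmetic: the hand-picked arrival minus the model-predicted arrival. Here's a tiny sketch of that step (again not the project's actual code, and both times below are made up for illustration):

```python
from obspy import UTCDateTime

# Hypothetical hand-picked P arrival and model-predicted arrival
picked_p = UTCDateTime("2013-02-06T01:22:41.30")     # made-up pick
predicted_p = UTCDateTime("2013-02-06T01:22:40.10")  # made-up prediction

# Residual in seconds: positive means the wave arrived later than the
# 1-D reference model predicts, hinting at slower material along the path
residual = picked_p - predicted_p
print(f"Travel-time residual: {residual:+.2f} s")
```

A collection of those residuals, measured across many stations and many events, is exactly what the tomography code inverts to map out where the Earth under the array is faster or slower than the reference model.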
That's about all for now, as tomorrow I head out into the field for the rest of the week to recover data from the OIINK stations that aren't linked to the cellular network. This will provide even more data for me to analyze in the upcoming weeks. I'm looking forward to recovering the raw data and seeing how the finalized 3D basin model will correct the travel-time calculations. Lots of work to be done before the first run of the tomography code in roughly two weeks.
Until next P-wave arrival time,