This project aims to locate magmatically induced, brittle-failure earthquakes within the lithospheric mantle beneath the Harrat Rahat volcanic field in northwestern Saudi Arabia. Brittle-failure earthquakes are quite small (magnitudes <2), but they produce distinct, repeating signals that have been observed in other Saudi Arabian volcanic fields. We will be using the Fingerprint and Similarity Thresholding (FAST) algorithm, developed at Stanford, to search large datasets from the area far more quickly than the methods traditionally used for similarity searching in seismology. The outcomes of this research will help inform volcanic and seismic hazard assessments for the area around Harrat Rahat, as well as improve our understanding of the evolution and history of the Red Sea rift.
Here's a less backdated post! The last couple weeks have been full of challenges and re-evaluations. We are still working hard on combating the memory issues that have been holding us back from processing more data. At the same time, I've had to look at approaching my project in different ways that would allow me to make more progress. This was a real challenge for me, since I am very much the kind of person who tends to power through and finish whatever I start - accepting that this project may not work out how I had hoped is definitely difficult, but I think it's a really good lesson to learn. As much as we may wish it did, science (and the world at large) just doesn't always work out the way we want it to!
So, after some discussions with my mentors, I have started on some slightly different but related work (in addition to continuing to troubleshoot the issues that have held my project back for several weeks). As I explained in my first post, my project is very similar to work that another student (Alex Blanchette) has done in a different volcanic field within Saudi Arabia. One of the things I've been fascinated by since the beginning of this project is looking at focal mechanisms for the events we find, in order to get a better sense of how the earth is moving in these regions. Because Alex has identified a number of events in the region he is looking at, I'm now working on focal mechanisms for that data; while it isn't the same region I was originally looking at, this should still give us a better sense of what is happening in the area. Stay tuned for more on that!
As always, if you're interested in what I do outside of work, check out my personal photo blog here!
This is a VERY backdated post, but I wanted to have a post for every couple of weeks so I can have a sense of how I've progressed over the course of my time here. Weeks 6 and 7 brought good news and bad news: the good news is that I was finally able to start moving forward with the data I got, but the bad news is that the debugging seems to be far from over. I was able to take the data I have, get arrival times, and locate potential earthquakes; however, the trouble is that the purpose of this project is to evaluate a large set of data, to find repeating events over a longer period of time than just a day (all we've been able to evaluate so far). Unfortunately, looking at more data is tricky: the code isn't optimized well enough to run that much data within the memory constraints we have. All of this is basically to say that while making progress is wonderful, it definitely brings on a whole new set of challenges!
On an unrelated note, I also spent this last week writing and submitting my abstract for the AGU (American Geophysical Union) conference - as did all of the other IRIS interns - and I have to say, I could not be more excited! While it was a little tricky to write an abstract without having any concrete results yet, I'm confident that by the time of the conference, I will have results to present. On top of that, just reading the other abstracts and projects that will be presented at the conference is really exciting - there is so much fascinating work happening right now! And, of course, I can't wait to see and reconnect with all of the other interns from this year. Only four more months!
Hi all -
Long time, no post! Sorry for the disappearance - I have been hard at work continuing to debug, which, while plenty of work, didn't really leave me many tangible things to talk about in a blog post. However, as of the end of Week 5, we officially have (drumroll, please) lift off! That's right, folks - I have DATA!!
I cannot overstate how excited I am to finally have data to look at, and to be through all of this initial debugging/programming work. That said, I think these five weeks of just working on debugging taught me a ton - not just about programming, but about perseverance. I'll admit that I was starting to feel somewhat frustrated about not having been able to do much actual geophysics work, but I think this was an important reminder to me that sometimes you have to work through the tough, less exciting problems before you can get to the glamorous work...and that makes getting to the exciting work just that much more exciting! I think this was also really good reinforcement that I am in the right field; the day I finally had data to look at, I ended up staying at work until 10:30pm (on a Friday) because I was so excited to start working with it. If that isn't proof that this is the right place for me to be, I don't know what is.
So, a quick explanation of what I've actually been doing (see footnotes for an explanation of the more technical language): essentially, as I've explained in previous posts, I am working with raw seismic data and then processing it through a complex algorithm to pull out very low amplitude (and therefore low signal-to-noise), repeated signals. However, the scripts[1] I'm working with were written to run on a different cluster[2] and some of them haven't been fully tested, which means I am essentially acting as a beta tester for the algorithm. In a way, this has been a really valuable experience in that it has forced me to understand exactly what the code is doing and how the algorithm works under the hood (not just theoretically), which certainly will make it easier to talk about/explain to others later on. At the same time, we are working on finding ways to parallelize[3] as much of the code as possible, as the processing time for larger amounts of data is still going to be a bottleneck. What was exciting about Friday was that I was finally able to get my raw data to run all the way through the algorithm and get out "useful" data. This is actually the same data I put into the algorithm, but you can think about it like this: I am handing the algorithm a haystack, and it is picking out the needles for me. The "needles" in this case are specific events (seismic waveforms) that appear at multiple stations and are repeated within the dataset I hand it. If these signals look like what we would expect for an earthquake (i.e., they have characteristic impulse "arrivals"), then we can assume that these are likely real events (not just correlated noise) that we are seeing in the region. The issue of correlated noise is something I am also working on right now - you can imagine, for instance, if a large truck were to drive past some of the sensors, you would also see similar, low signal-to-noise signals on those stations.
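To give a flavor of what "repeated signals" means here, the toy sketch below (not the FAST code itself, and much simpler than what it actually does) flags two waveform windows as a repeat when their normalized cross-correlation is high. The windows and the idea of a single zero-lag comparison are simplifications for illustration:

```python
# Toy illustration of repeated-signal detection: two windows count as a
# "repeat" if their normalized (zero-lag) correlation is close to 1.
import math

def normalized_correlation(a, b):
    """Zero-lag normalized correlation of two equal-length windows."""
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    da = [x - mean_a for x in a]
    db = [x - mean_b for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

# Two copies of the same small "event" at different amplitudes, plus an
# unrelated noise window (all values made up for illustration).
event = [0.0, 0.1, 0.9, -0.8, 0.2, 0.0]
repeat = [x * 0.3 for x in event]          # same shape, lower amplitude
noise = [0.05, -0.04, 0.02, 0.03, -0.05, 0.01]

print(normalized_correlation(event, repeat))  # → 1.0
print(normalized_correlation(event, noise))   # much lower: not a match
```

Directly cross-correlating every pair of windows like this is exactly the brute-force approach that FAST is designed to avoid at scale, which is why the fingerprints described elsewhere on this blog matter.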
One way to deal with this is to set thresholds to filter out signals that are only seen on a few stations, particularly a few stations that are all right next to each other; fortunately, we have enough data that we can selectively choose to look at signals that are only picked up on a significant number of stations (although I am still working on figuring out exactly what "significant" means in this case).
Here is an image of a record section of all the stations (each with three components) that I'm looking at. In this particular section, similar waveforms were detected on 15 components. From here, I will pick arrival times, locate the actual events, plot the events and then try to determine focal mechanisms for each event. With more data, we hope that this will give us a sense of the trend (if there is any) of events in the shallow mantle, which can then inform our interpretations of the geological setting in the area. Stay tuned for more on this!
I have also been having fun on occasion - click here to see my personal photo blog, and make sure to hover over the pictures for more details about what I've been doing in my spare time!
Until next time,
[1] For those of you who aren't programmers: the algorithm I'm talking about is essentially a series of "scripts," or chunks of code, each of which performs a step of the algorithm. Think of it as Ford's original assembly line for the Model T - each step must be finished before the next in order to turn a bunch of pieces (the raw data) into something useful (the processed data, which in this case is essentially the needles picked out of the haystack).
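In code, that assembly-line structure just means each step consumes the previous step's output. The step names below are invented for illustration (they are not the actual FAST script names), and each body is a trivial stand-in for real processing:

```python
# Schematic "assembly line": each stage feeds the next, and a failure at any
# stage stops the line. The stage bodies are placeholders, not real processing.

def preprocess(raw):          # stand-in for filtering/windowing raw traces
    return [x * 2 for x in raw]

def fingerprint(windows):     # stand-in for reducing windows to summaries
    return [w % 5 for w in windows]

def similarity_search(fps):   # stand-in for finding repeated fingerprints
    seen, repeats = set(), []
    for fp in fps:
        if fp in seen:
            repeats.append(fp)
        seen.add(fp)
    return repeats

raw_data = [1, 2, 1, 3]
result = similarity_search(fingerprint(preprocess(raw_data)))
print(result)  # → [2]
```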
[2] A cluster can be thought of as a set of several very large, high-performance computers. These are typically connected to remotely and shared among multiple research groups. Because we are running huge amounts of data, everything is run on the cluster (a normal computer would take days to complete a process that can be done on the cluster in an hour, if you allocate memory in the cluster effectively).
[3] Running scripts in parallel means that the code is working on multiple things at the same time - for example, if I have 16 stations, for some steps I can send each station to a different "node" (one of the computers that make up the cluster) to be processed, which is obviously faster than trying to run it all on one node.
I'm just wrapping up week two here at Stanford, and it has definitely been an eventful week! I had the opportunity to go to two thesis defenses this week, sit in on a meeting at the USGS at Menlo Park, and even watch a few World Cup games with my office mates (while working, of course!).
Work-wise, I've spent pretty much the entire week debugging, which I've been told multiple times now is about 90% of grad student life. As I mentioned in my first post, I am primarily working with an algorithm that was already developed for processing large amounts of seismic data. However, the original algorithm was built to run with different data, for a different purpose, and on a different cluster - all of which essentially means that I get to spend a lot of my time running codes, looking up errors, and trying to tailor the pre-existing codes to fit my work. It may not exactly be the most glamorous work, but I am definitely learning a ton! Also, as another grad student in our group told me, I am essentially a "beta tester" for FAST (the algorithm I'm running) - so, possibly, my work might be able to benefit some other interns down the line. Here's to hoping!
In other news, I had the opportunity last weekend to take the train into San Francisco - it is such a beautiful city! I mostly just wandered around all day, but I got to meet a lot of interesting people and see a ton of the city. I will definitely be going back soon! Here is a picture I took of San Francisco City Hall while I was there:
So thrilled and grateful to be doing such fascinating work in such a beautiful part of the country!
My name is Brianna, and I am going into my fifth year at the University of Washington in Physics. I am currently working under Dr. Simon Klemperer at Stanford University, and I am absolutely thrilled to be here! After a wonderful week of IRIS orientation in Socorro, New Mexico, in which I was able to learn how to install seismometers, process seismic data, and meet 18 other wonderful IRIS interns, I made my way down here to Palo Alto. And let me tell you - it is beautiful!
On my first day on the job, I actually had the opportunity to present what I thought I would be working on for the summer, based on what I had learned at orientation and through papers I had read. This presentation was in front of Simon (my PI), Alex (the grad student I am primarily working with), and some other summer interns at Stanford. While this definitely felt like jumping into the deep end on my first day, Simon and Alex were both very supportive, and it really forced me to make sure I knew what I was talking about and be very prepared before even starting my internship!
Also within my first week, Simon had me create a week-by-week list of my goals. These were mostly technical goals, but by working through them with Simon and Alex, I was really able to get a handle on how I'm going to tackle the problem that is my research project - which, of course, brings me to my actual project.
This summer, I will be working on finding brittle-failure earthquakes in the lithosphere below Harrat Rahat, a volcanic field in northwestern Saudi Arabia. Because these earthquakes are quite small (think magnitudes between -1 and 2ish), I will be using an algorithm developed here at Stanford to process my seismic data. This algorithm, nicknamed FAST, is really fascinating (article here, if you're interested) - it essentially identifies all potentially useful waveforms and stores them as binary "fingerprints," which are able to store identifying features of the waveform in a very small amount of data. These "fingerprints" can then be compared across large datasets (I will eventually be looking at year-long datasets from ~20 three-component broadband stations!) to identify similar events. I might just be a nerd, but I think that's pretty darn cool!
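To make the fingerprint idea a little more concrete, here is a toy version (the real FAST algorithm builds its fingerprints from wavelet features and compares them with locality-sensitive hashing - this sketch only keeps the spirit of "compress, then compare," and every detail in it is a simplification I made up for illustration):

```python
# Toy "fingerprinting": compress a waveform window into a small binary
# pattern (the sign of each sample-to-sample change), then compare
# fingerprints by the fraction of matching bits.

def fingerprint(window):
    """Binary fingerprint: 1 where the sample-to-sample change is positive."""
    return tuple(1 if b > a else 0 for a, b in zip(window, window[1:]))

def similarity(fp1, fp2):
    """Fraction of matching bits (1.0 = identical fingerprints)."""
    matches = sum(1 for x, y in zip(fp1, fp2) if x == y)
    return matches / len(fp1)

event = [0.0, 0.2, 0.9, -0.5, -0.1, 0.3]
repeat = [x * 0.5 for x in event]   # same event, half the amplitude
noise = [0.1, -0.2, 0.15, -0.1, 0.05, -0.3]

print(similarity(fingerprint(event), fingerprint(repeat)))  # → 1.0
print(similarity(fingerprint(event), fingerprint(noise)))   # noticeably lower
```

The key property, which the toy shares with the real thing, is that the fingerprint is tiny compared to the waveform and insensitive to amplitude, so a year of data from ~20 stations can be compared fingerprint-to-fingerprint instead of waveform-to-waveform.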
After getting settled, this week primarily consisted of downloading a lot of software, getting the hang of QGIS, and then preprocessing one week of data to run through FAST as a test. Preprocessing involved modifying and running some scripts that Alex had written for a very similar project - however, as any programmer knows, it definitely did not go as smoothly as expected! After working through a number of roadblocks, and with a good amount of help from Alex, I was able to get all of the data cleaned up to run through FAST. I am very excited to see what turns up when I take a look at it on Monday morning - stay tuned!
(Picture: me at the beautiful Muir Beach north of San Francisco!)