In my last few posts, all of my titles corresponded to the names of songs... but since I wanted to talk about the methods I'm using in my research, I couldn't think of an appropriate song off the top of my head. But never fear! A quick Google search revealed that there is actually a band called Tool (http://en.wikipedia.org/wiki/Tool_(band)), so I am able to keep the music-related titles going!
Now that I'm fully submerged in the data analysis aspect of my research (although I still frequently have to reference textbooks and articles to understand why I'm telling the computer to do the things I do), I can describe the tools I'm using to calculate the seismic anisotropy of Rayleigh waves.
First: the data. I'm combining data from the second phase of the PLUME deployment, in which 37 ocean-bottom seismometer (OBS) stations were placed around Hawaii at a spacing of ~200 km, with data from land stations. Hopefully, this pool will expand to include data from the earlier SWELL pilot experiment and from the first phase of the PLUME project, in which 35 OBS were placed more densely and closer to Hawaii. Complicating this process is the fact that the instruments had different levels of success at collecting seismic data, ranging from no data at all to over a year's worth. Additionally, because we don't know the instrument response of the SIO OBS, they have to be analyzed separately from the WHOI OBS and the land stations. This image from my mentor's article in Eos Transactions (Laske et al., 2009) shows the locations of the stations and which did and did not provide data.
Second: the analysis. My mentor largely programs in Fortran; I was largely trained in Matlab. So what tends to happen is that the scripts I write are in Matlab and the scripts my mentor writes are in Fortran. I do most of my analysis by running the Fortran executables through the terminal and writing small scripts, often in awk, that automate the process so it's faster and I can do something else while the analysis is running, like read or blog here. I use my Matlab scripts to figure out which data to run the Fortran executables on for a given station configuration. I then use GMT to plot the results, modifying the GMT scripts depending on the data I'm plotting and what I am trying to show. A sketch of this kind of automation follows below.
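To give a flavor of that automation, here is a minimal shell-and-awk sketch of the sort of wrapper I mean. The executable name (measure_aniso), the file names, the error threshold, and the output column layout are all hypothetical placeholders for illustration, not the actual names from our codes:

```sh
#!/bin/sh
# Hypothetical sketch: loop over station triangles, run a Fortran
# measurement executable on each, and collect results that pass a
# quality cut. "measure_aniso", triangles.txt, the 0.05 threshold,
# and the column layout are made-up placeholders.

while read sta1 sta2 sta3; do
    # Run the executable for one station triangle; assume it writes
    # one line per period: period, phase velocity, error.
    ./measure_aniso "$sta1" "$sta2" "$sta3" > tmp.out

    # Keep only measurements whose error (column 3) is below the
    # threshold, tagging each line with the triangle it came from.
    awk -v tri="$sta1-$sta2-$sta3" '$3 < 0.05 { print tri, $0 }' tmp.out
done < triangles.txt > measurements.txt

# measurements.txt can then be fed to a GMT plotting script,
# the details of which depend on the map being made.
```

The point of a wrapper like this is simply that the whole batch runs unattended, which is what frees up the time to read or blog while it churns.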
Then I look through the outputs of these processes and figure out how to modify the analysis to improve the results. For example, yesterday I went through the 309 azimuthal anisotropy measurements, which were very messy, and identified ~200 for which the trigonometric fit (of the general form sketched below) did not describe the data well, generally because (1) there weren't enough data, (2) the data were messy, (3) a few relatively anomalous points with small error bars skewed the fit (although my mentor set a minimum error threshold, so this is less of a problem), or (4) some combination of the above. I re-made the anisotropy maps with the better ~100 measurements. Today, I'm going to run my scripts to determine more station triangle combinations to fill in the southern portion of the Hawaiian swell, where data were lacking in the last analysis.
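For readers wondering what "trigonometric fit" means here: in surface wave studies, the azimuthal dependence of phase velocity for weak anisotropy is conventionally expanded in the form below (Smith & Dahlen, 1973). This is the standard textbook expansion, not necessarily the exact parameterization in the code I use:

```latex
% Phase velocity c as a function of frequency \omega and propagation
% azimuth \psi for weak anisotropy; the A_i are the fitted coefficients.
c(\omega, \psi) = c_0(\omega)
  + A_1(\omega)\cos 2\psi + A_2(\omega)\sin 2\psi
  + A_3(\omega)\cos 4\psi + A_4(\omega)\sin 4\psi
```

For Rayleigh waves the 2ψ terms usually dominate, so a "bad fit" in my list above typically means the available azimuths don't constrain the cos 2ψ and sin 2ψ coefficients well.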