I will be using data recorded from EarthScope Transportable Array and Flexible Array stations surrounding the Mississippi Embayment (ME) to understand Moho depths and sedimentary characteristics in this area. This will help to better understand the geologic history of the ME and the activity of the New Madrid Seismic Zone (NMSZ). I will perform cross-correlation and autocorrelation on the HPC at CERI to analyze data at each station and isolate the Green's function in the ME. The autocorrelations will be used to identify reflectors. Dispersion measurements will then be made, and group and phase velocities will be calculated. Finally, a tomographic model of the northern ME will be constructed.
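
The idea behind the autocorrelation step is that the autocorrelation of ambient noise recorded at a single station approximates that station's zero-offset reflection response, so a subsurface reflector shows up as a peak at its two-way travel time. Here is a toy sketch of that idea in Python (this is illustrative only, not my actual MATLAB/SAC processing; the synthetic "reflection" below is made up):

```python
# Toy sketch: normalized time-domain autocorrelation of a signal.
# A delayed echo in the signal shows up as a peak at the echo lag.

def autocorrelate(x, max_lag):
    """Return the normalized autocorrelation of x for lags 0..max_lag."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]
    var = sum(v * v for v in d)
    out = []
    for lag in range(max_lag + 1):
        s = sum(d[i] * d[i + lag] for i in range(n - lag))
        out.append(s / var)
    return out

signal = [0.0] * 64
signal[5] = 1.0   # "direct arrival"
signal[25] = 0.5  # "reflection" arriving 20 samples later
acf = autocorrelate(signal, 30)
# acf peaks at lag 0 (by definition) and at lag 20, the reflector delay
```
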

For this week I will be posting an image of a GUI I have recently been using to look at stacks of autocorrelations of my data.

This GUI is very useful for identifying outliers and for checking whether the nine-month stack of data correlates with the majority of my autocorrelations.

Each waveform is 10 hours of data, and each window contains 25 waveforms.
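
The outlier check I do by eye in the GUI amounts to comparing each autocorrelation window against the long-term stack with a correlation coefficient. A rough sketch of that logic in Python (all names, thresholds, and synthetic data here are hypothetical, not from the actual GUI code):

```python
# Sketch of the outlier check: compare each autocorrelation window
# against the stack via a Pearson correlation coefficient, and flag
# windows that correlate poorly. Threshold is an arbitrary example.
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

def flag_outliers(windows, stack, threshold=0.5):
    """Return indices of windows that correlate poorly with the stack."""
    return [i for i, w in enumerate(windows) if pearson(w, stack) < threshold]

stack = [math.sin(0.3 * t) for t in range(100)]
good = [[1.1 * v for v in stack] for _ in range(3)]  # scaled copies of the stack
bad = [[math.cos(0.9 * t) for t in range(100)]]      # unrelated waveform
print(flag_outliers(good + bad, stack))  # → [3]
```
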

Unfortunately, I cannot post an image to this blog at this time, but I will try to post it in the comments section or in another blog post.

I have finally completed all of the programs I need to gather station data, autocorrelate, and stack.

In my arsenal: MATLAB code to build URLs for downloading station data, SAC macros to convert from velocity to displacement, MATLAB code to autocorrelate, and shell scripts to organize the data, stack it, and then delete the raw data.
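
For reference, the URL-building step targets the IRIS FDSN dataselect web service. A Python sketch of what that step does (my actual code is MATLAB, and the station, channel, and time values below are just examples):

```python
# Sketch of building an IRIS FDSN dataselect request URL.
# Parameter names follow the dataselect service; the values are examples.
from urllib.parse import urlencode

def build_dataselect_url(net, sta, cha, start, end, loc="--"):
    base = "https://service.iris.edu/fdsnws/dataselect/1/query"
    params = {"net": net, "sta": sta, "loc": loc, "cha": cha,
              "starttime": start, "endtime": end}
    return base + "?" + urlencode(params)

url = build_dataselect_url("TA", "R43A", "BHZ",
                           "2012-01-01T00:00:00", "2012-01-02T00:00:00")
print(url)
```
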

I will be processing 150 TA stations covering roughly two years of data. Each station's data set is around 10 GB and takes about 19 hours to process on the CERI Macs. After my first dry run gathering data for station R43A, I will hopefully use the HPC (High Performance Computing cluster) to run the rest at a much quicker rate.
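
For a sense of scale, here is the back-of-the-envelope arithmetic (using the numbers above) for why the HPC matters:

```python
# Serial runtime estimate for processing every station on one CERI Mac,
# using the figures from this post: 150 stations at ~19 hours each.
stations, hours_each = 150, 19
total_hours = stations * hours_each   # 2850 hours
total_days = total_hours / 24         # ~119 days if run one after another
print(total_hours, round(total_days, 1))
```
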

Unfortunately, I cannot use more than 50 GB on the CERI Macs, so I will need to delete each data set promptly once I am done processing it.
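
The quota forces a process-then-delete workflow: finish one station's raw data and remove it before starting the next. A minimal sketch of that loop in Python (the function and directory names are hypothetical stand-ins for my actual MATLAB/shell scripts):

```python
# Sketch of the quota-driven workflow: keep only one station's raw data
# on disk at a time, and delete it as soon as the stack is saved.
import os, shutil, tempfile

def process_station(sta, workdir):
    """Stand-in for download + autocorrelate + stack for one station."""
    raw = os.path.join(workdir, sta, "raw")
    os.makedirs(raw, exist_ok=True)
    # ... download ~10 GB into `raw`, autocorrelate, stack ...
    stack_file = os.path.join(workdir, sta, "stack.out")
    open(stack_file, "w").close()   # stand-in for the saved stack
    shutil.rmtree(raw)              # free quota before the next station
    return stack_file

workdir = tempfile.mkdtemp()
out = process_station("R43A", workdir)
```
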

While I wait the 19 hours for the first data set to gather, I will be working on making my code more efficient and automating more of the process.

I was able to get the autocorrelation MATLAB code working with minimal problems. Now I am stacking the data.

First I stacked a few days of data, then a week. Both yielded some signs of reflectors, but gathering, processing, and stacking even a month's worth of data takes about 5 hours.
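
The stacking step itself is simple: average many single-day autocorrelation traces sample by sample, so coherent reflections add up while incoherent noise averages out. A toy sketch with synthetic data (illustrative only, not my MATLAB stacking code):

```python
# Sketch of linear stacking: the sample-wise mean of many noisy
# single-day traces. A common "reflector" at sample 30 survives the
# averaging while the random noise cancels. Data below are synthetic.
import random

def stack(traces):
    """Sample-wise mean of equal-length traces."""
    n = len(traces)
    return [sum(col) / n for col in zip(*traces)]

random.seed(0)
signal = [1.0 if i == 30 else 0.0 for i in range(100)]  # shared reflector
days = [[s + random.gauss(0, 0.5) for s in signal] for _ in range(30)]
stacked = stack(days)
# after stacking 30 "days", the reflector at sample 30 stands out:
print(max(range(100), key=lambda i: stacked[i]))
```
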

For week 5, I will be writing SAC macros, awk commands, and more MATLAB code to streamline this process.

The GUI code is not working at all, so I will simply plot the autocorrelations in SAC and take screenshots to get the images.

Most of my work last week went into trying autocorrelation through SAC with different parameters. I tried multiple ranges for the first and second bandpass filters, different frequency limits for removing the instrument response, and different highpass filters.

After running the seismograms by my graduate advisor, Blaine, we concluded that it would be a better idea to perfect a code that another graduate student put together years back to identify reflectors within large volumes of data through autocorrelation.

I now have two sets of MATLAB code: one handles data acquisition and autocorrelation; the other plots the figures through a GUI.

I have spent two days so far on the first set, and I can comfortably say that this student was not fond of commenting his code or of writing it efficiently.

I have annotated approximately 90% of the first set of code; later today I will finish the other 10% and work out the looping pattern with pen and paper. After that I will organize the programs and start using them to autocorrelate the data.

For anyone interested, I will be opening up a forum for MATLAB help and suggestions. I can also send anyone the code I am working on, with annotations, if they want a good reference for seismic analysis code in MATLAB.

For my project, I will be using the NELE data and some of the TA data. I am currently using JWEED to access the data from IRIS.

I will be working with approximately two years of raw data at nearly 80 stations. Many faculty at CERI use the NELE data, but as far as I know, I will be the first to deal with such a large quantity of ambient noise data.

I will also be using SAC to clean up, correlate, and stack the data. I will be doing most of this through the High Performance Computing (HPC) cluster at CERI.

To plot and display the data, I will be using both MATLAB and GMT.

The main goal of this week was to familiarize myself with CERI, obtain a University username, and establish a CERI account for the computer labs. I was just able to accomplish the final task of obtaining my CERI account and was given a quick tutorial on how the Macs work.

I spent most of this week studying what will be accomplished during my project and some of the methods for doing so. While I am still struggling with how a lot of the processing works, I better understand the basics of seismic processing and what the information in a seismogram gives me. I also was able to better familiarize myself with MATLAB and SAC.

I met with Dr. Langston today, and we outlined the structure of my project in four sections:

1) Correlate to get the source-receiver waveforms through cross-correlation and autocorrelation.
2) Make dispersion measurements to find group and phase velocities.
3) Construct a tomographic model of the northern ME.
4) Use structural inversion to get a map of the shear velocities at varying depths.

Knowing these tangible goals, I am much more confident in my ability to perform each of them. I also was able to take a look at what the programming will look like and what style of data I will be dealing with. The first goal of correlating and organizing the data from all of the stations should take roughly a month, but could be longer since a huge part of my project is locating the reflectors from autocorrelation. Dr. Langston suggested July 7th as a rough goal for accomplishing this first part.
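
A quick sketch of the idea behind the dispersion-measurement step: the group velocity at a given period is just the inter-station distance divided by the travel time of that period's energy packet in the correlation. The numbers below are made up purely for illustration:

```python
# Sketch of a group-velocity measurement: distance over travel time.
# Example values are hypothetical, not measurements from my data.
def group_velocity(distance_km, traveltime_s):
    return distance_km / traveltime_s

# e.g., 120 km station spacing with energy at some period arriving
# 40 s after the correlation's zero lag:
print(group_velocity(120.0, 40.0))  # → 3.0 km/s
```
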