I chose this title because parts of my research this summer - while very meaningful - involve plugging data into a script and evaluating whether the results are reasonable. Much of the early-stage work on my project has already been completed, so it's likely I won't see the original seismic data during my analysis this summer. I hope to gradually learn how others processed the original data, though, and to better understand the dataset I've been handed to analyze.

My current task is to run scripts that calculate the azimuthal anisotropy of Rayleigh waves around the Hawaiian Plume at various frequencies (lower frequencies sample deeper). Although a lot of research has already been published on the Hawaiian Plume data, including various forms of tomography and receiver functions, this analysis will be the first to show how anisotropy might change with depth.

My most recent idea was to maximize the number of anisotropy measurements we could obtain from the available data. Measurements are made by comparing arrivals across a triangle of three stations. From talking to my mentor and looking at previous data, I approximated size and shape constraints for "good" station triangles (closer to equilateral, not too small or too big) and designed a script that outputs every 3-station combination meeting the criteria - rather than eyeballing them as in the past. The general algorithm is outlined in the flowchart:

I was really excited at first because I was initially working with a station list that included extra stations whose data we can't actually use (we don't know their instrument response), so I was getting well over 50 triangles even with somewhat stringent size and shape criteria (all angles between 30 and 120 degrees, average side length 0.5-3 degrees). When I later removed the Scripps stations we weren't using, the number of good triangles available for calculating anisotropy across the region dropped sharply - the picture below shows the sad effect this realization had.
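A minimal sketch of how this combination search might look - this is my own reconstruction in Python, not the actual script. The station names and coordinates are made up for illustration, and a real version would use great-circle distances rather than the flat-earth shortcut below; only the criteria (angles 30-120 degrees, average side 0.5-3 degrees) come from the description above:

```python
from itertools import combinations
import math

# Hypothetical (name, lat, lon) tuples in degrees -- placeholders,
# not real station locations.
stations = [
    ("A", 19.0, -155.0),
    ("B", 19.5, -154.0),
    ("C", 20.2, -155.5),
    ("D", 18.6, -153.8),
]

def side_lengths(tri):
    """Approximate inter-station distances in degrees.

    A flat-earth approximation (longitude scaled by cos(latitude))
    is good enough for a rough shape check at this scale.
    """
    def dist(p, q):
        dlat = p[1] - q[1]
        dlon = (p[2] - q[2]) * math.cos(math.radians((p[1] + q[1]) / 2))
        return math.hypot(dlat, dlon)
    a, b, c = tri
    return dist(b, c), dist(a, c), dist(a, b)

def interior_angles(sides):
    """Interior angles in degrees via the law of cosines."""
    def angle(opp, s1, s2):
        # clamp guards against tiny floating-point overshoots
        cos_a = max(-1.0, min(1.0, (s1*s1 + s2*s2 - opp*opp) / (2*s1*s2)))
        return math.degrees(math.acos(cos_a))
    a, b, c = sides
    return angle(a, b, c), angle(b, a, c), angle(c, a, b)

def good_triangles(stations, min_angle=30.0, max_angle=120.0,
                   min_avg_side=0.5, max_avg_side=3.0):
    """Return every 3-station combination meeting the size/shape criteria."""
    keep = []
    for tri in combinations(stations, 3):
        sides = side_lengths(tri)
        if not (min_avg_side <= sum(sides) / 3 <= max_avg_side):
            continue  # too small or too big on average
        if all(min_angle <= ang <= max_angle
               for ang in interior_angles(sides)):
            keep.append(tuple(st[0] for st in tri))  # keep station names
    return keep
```

With the four sample stations above, two of the four possible triangles pass: the other two are rejected because one angle exceeds 120 degrees, i.e. the triangle is too flattened to constrain azimuthal anisotropy well.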

Perhaps the data from land stations can be added, or, time permitting, SIO instruments can be compared to one another. For now, though, the figure on the bottom right illustrates the 36 yellow station triangles (from criteria applied to all stations) and 7 red ones (from more liberal criteria applied to the NE stations, where sampling is otherwise sparse) that I will be analyzing over the next few days.

Overall, though, I'm really optimistic about the effect this method of determining station triangles will have on the analysis: it imposes more consistent quality constraints while providing more triangle combinations than had been found manually in the past.
