Mangungane, Mozambique 2016 Earthquake

Here’s a quick update for the Hongtu, China earthquake:

Since Gareth and I need to model about twenty earthquakes, we decided to set that earthquake aside for now. The forward model we produced was adequate, so we moved on.

I am currently in the process of modeling an earthquake that occurred near Mangungane, Mozambique in 2016.

Here is what the unwrapped interferogram for that looks like:

I will go through the steps (again) of creating an inverse model to help break down the process (for me and for you).

The first step is to make a directory for this earthquake under the modeling directory.

This is done through the command mkdir (name of earthquake). Within that directory, I make two more: one for okinv and one for data, and within that data directory I make another one for ascending (since this is an ascending swath). Ultimately, here is what the path reads in the terminal:

~/modeling/mangungane_mozambique_2016/data/ascending

The next step is to split the phase and amplitude, through the command:

rmg2mag_phs filt_topophase.unw.geo filt_topophase.amp.geo filt_topophase.phs.geo 4533

The numerical value at the end denotes the number of columns, which can be obtained through the command:

head filt_topophase.unw.geo.vrt

which looks like this in the Linux terminal:

I found the head command extremely useful: instead of using gedit to open the whole ASCII text file, I can view that information immediately in the terminal. There is also the tail command, which shows the bottom lines of a file and can also come in handy.

Here is what the phase layer looks like after the amplitude and phase layers are split. This image is displayed using the command mdx -r4 filt_topophase.phs.geo 4533

I circled the white blob which is the earthquake, and the arrow is pointing to the river that is cutting right through it.

In addition, I must also split the correlation file using the command:

rmg2mag_phs topophase.cor.geo /dev/null topophase.cor.geo.r4 4533

I then type,

mdx -r4 topophase.cor.geo.r4 4533

This image then appears (the arrow is pointing to the river):

After that process, I can now start copying those over to the ascending directory I made.

This is done through this command:

~/modeling/mangungane_mozambique_2016/data/ascending cp /data/mangungane_mozambique_2016/swath1/merged/filt_topophase.phs.geo .

The first path is my current working directory, cp stands for copy, the second path is where the file is, and the . at the end means copy it to where I currently am. I do the same for the filt_topophase.unw.geo.vrt and filt_topophase.unw.geo.xml files. This whole step also requires me to copy scripts into the appropriate directory, which I then edit according to the earthquake information.

Afterwards, I go into the crop_data.m and crop_data_correlation.m scripts to input all the values required to successfully crop and mask the image. These values include the longitude and latitude, which in this case are 33.098333333 and -21.0391666666, respectively. I must also approximate where the crop should start, enter the crop size of 1536 (which then becomes the new column number), and edit the names of the input and output files (making sure the input files are all in the correct directory). This part involves some trial and error, and here is the end result:

Top: Uncropped. Bottom: Cropped

Top: Uncropped. Bottom: Cropped

This end result works well, but when you take a closer look at the image you can see an unwrapping error (Gareth’s very keen eye detected this).

You can see the two blobs that I circled, and there is an additional error that was too small to circle. He discovered this because, when we viewed the image using the command mdx -r4 mangungane_asc_crop_mask_1536.r4 1536, points within the error had values that made no sense together.

I asked Gareth if unwrapping errors were common, and he responded with yes. He also said that fixing those errors is a “dying art” which I found to be interesting.

He noticed the error when we were running the unw_pha_quadtree script in MATLAB. As I have mentioned before, running this script initiates the quadtree decomposition process (a technique that divides an image into blocks, and the first step of the image compression). After running that script, this is the resulting image:

The desired result is to have the points (which indicate the center in each block) along the edge of the earthquake. During this part, we ran into a data sampling error due to the combination of the unwrapping error and the river cutting directly through the earthquake.

Fortunately, Gareth was able to resolve the unwrapping error. I emailed him the mangungane_asc_crop_mask_1536.r4 file; he cut out the data in the errors that couldn't be fixed and shifted the bad values to the correct data points in the areas that were fixable. He did this in a piece of software called ER Mapper. I then input the fixed file into the unw_pha_quadtree script.

After that, I transferred all the necessary information from that script into the los_quadtree.m script. I edit this twice, saving and running it once for the incidence file and once for the azimuth file. Once this script is run, the output files in this listing are saved into the quadtree directory:

apply_qt_boxes.m mangungane_asc_crop_mask_1536_fixed_null.dat
asc_ll.okinv mangungane_asc_crop_mask_1536_fixed_qtc.dat
asc_utm.okinv mangungane_asc_crop_mask_1536_fixed_qtd.mat
azimuth.r4 mangungane_asc_crop_mask_1536_fixed_qti.r4
box_x_out.dat mangungane_asc_crop_mask_1536_fixed_utm.dat
box_y_out.dat pendiffexpcos.m
calc_e.m pendiffexp.m
crop_images.m process_quadtree.m
cvdcalc.m qt_azimuth_crop_1536_qtc.dat
dem_quadtree.m qt_azimuth_crop_1536_qti.r4
detrend.m qt_azimuth_crop_1536_utm.dat
dist_test.m qtgetblk.m
do_quadtree.m qt_incidence_crop_1536_qtc.dat
expcos.m qt_incidence_crop_1536_qti.r4
flatten_ff.m qt_incidence_crop_1536_utm.dat
incidence.r4 unw_pha_quadtree.m
load_dem.m unw_pha_quadtree.m~
los_quadtree.m utm.xy
make_okinv_file.txt

All the files ending in .m were originally copied over scripts (that Gareth typed up when he first showed me the process).

However, the asc_utm.okinv file is made when I type commands into the terminal that I obtain from make_okinv_file.txt. I input the correct files in the command, change the origin of the local coordinate system (values that are obtained from asc_utm.okinv), and I must enter the correct UTM zone. That zone is found when I enter the command ll2utm (with the x and y reference points obtained from cropregdata.dat), and it comes out to be -36 (negative because it's in the south).
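
The zone number itself follows a simple rule. Here is a toy Python version of the standard UTM zone arithmetic (my own illustration of what ll2utm works out internally, using the negative-for-southern-hemisphere sign convention from above):

```python
import math

def utm_zone(lon, lat):
    # Standard 6-degree-wide UTM zones, numbered 1-60 eastward from 180W;
    # negative for the southern hemisphere, as in the convention above.
    zone = int(math.floor((lon + 180.0) / 6.0)) + 1
    return -zone if lat < 0 else zone

# The Mangungane crop coordinates from earlier in this post:
print(utm_zone(33.098333333, -21.0391666666))  # -> -36
```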

Now, once I have the okinv3.inp file, I can start inputting the appropriate parameters for the earthquake. I provide a rough estimate as to where the fault is, and I go to this website:

http://www.globalcmt.org/CMTsearch.html

I search the date of the earthquake, which is 9/22/16, and here is the information that appears:

Fault plane:  strike=339    dip=38   slip=-108
Fault plane:  strike=181    dip=54   slip=-76

We entered the second line into the okinv3.inp file, and when I typed okinv3 okinv3.inp in the terminal, files including the following were produced:

asc_utm.okinv  dmr.ps       okinv3.inp                     resid.cpt
boot.out       gmt.history  param.out                      resid.out
displ.out      insar.cpt    plot_data_model_residual.gmt5  restmp

Afterwards, I type the command gedit param.out and enter the appropriate parameters there as well. Once that is saved, one of the last steps is to type

csh plot_data_model_residual.gmt5

And Ta-Da!

Inverse model:

On the left is the data, in the middle is the model, and on the right is the difference, which is the misfit of the model to the data. Here the misfit is about three mm, which is pretty good.
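
A quick note on that number: as I understand it, the misfit summarizes the residual (data minus model), and a root-mean-square over the pixels is one common way to reduce it to a single value. A toy sketch with made-up residuals:

```python
import math

def rms(residuals):
    # Root-mean-square of the residuals (data minus model), in meters.
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical residuals of +/- 3 mm give an RMS misfit of 3 mm:
print(round(rms([0.003, -0.003, 0.003, -0.003]), 6))  # -> 0.003
```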

And there you have the inverse model for the Mangungane, Mozambique earthquake folks.

With that complete, I must continue with modeling about eighteen more earthquakes!

Inverse vs. Forward Model

My meeting with Gareth opened my eyes to mistakes I made with the Hongtu, China earthquake I was attempting to model. Errors happen, but I learned more because of those mistakes.

First off, I had the incorrect crop size. I had put 1200, which is too small. This value has to be divisible by two enough times to leave an adequate amount of data. He stated that, for earthquakes of magnitude five or six, a crop size of 1536 will provide sufficient data.
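
That divisibility rule is easy to check. Here is a little Python helper (my own, not part of the workflow) that counts how many times a candidate crop size halves cleanly, which is how many levels the quadtree can split:

```python
def times_divisible_by_two(n):
    # Count the factors of two: each one is another level at which the
    # quadtree can split the crop before hitting an odd block size.
    count = 0
    while n % 2 == 0:
        n //= 2
        count += 1
    return count

print(times_divisible_by_two(1200))  # -> 4   (1200 = 75 * 2**4)
print(times_divisible_by_two(1536))  # -> 9   (1536 = 3 * 2**9)
```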

After I fixed that in the scripts, I had to change the shifts in the rows and columns for the correct cropped image; as a result, I also had to change the x and y coordinates for the start of the crop. He informed me that crop_data.m is a simpler version of the crop_data_correlation.m script, and the latter is all that needs to be used. On top of that, the quadtree process begins with unw_pha_quadtree, where the minimum and maximum quadtree thresholds can be altered as necessary. Most of the values in the script can be adjusted freely to get our desired result (meaning equally spaced points). After saving and running that, we get an image with a bunch of points; if there are clusters of points, then the script needs a different value for the box size or the quadtree thresholds (like I said, all of that is adjustable, and I need to mess with the values). Once we have the boxes for the quadtree, that information needs to be transferred to the incidence and azimuth scripts. There are numerous steps to this, and I will gladly elaborate more in a later post.

On another note, not only did I make several mistakes, but I chose an earthquake that has already been the center of discussion and debate. Due to the shape of the earthquake, there are two possibilities for where the thrust fault is. A thrust fault is a low-angle reverse fault where the hanging wall sits above the footwall. Essentially, I chose a very interesting earthquake for my first inverse model.

I mentioned the inverse model, so I will elaborate on that. This whole process produces an inverse model, which uses the data to retrieve parameters such as strike, dip, rake, and slip using okinv. After Gareth ran through the quadtree steps with me, we realized that we couldn't do an inverse model.

Typing the command okinv3 okinv3.inp in the terminal creates files such as param.out and resid.out.

Once we typed gedit param.out, we input all the parameters we think are associated with the earthquake, using information from http://www.globalcmt.org/CMTsearch.html. However, since seismologists can't pinpoint exactly where the fault plane is, there were two options, and we chose the first of these two:

Fault plane:  strike=146    dip=43   slip=83
Fault plane:  strike=335    dip=47   slip=96

With the param.out script, we input the appropriate parameters and bounded them. One of those was the coordinates of the fault, which were found using the cropregdata.dat file; it contains the coordinates for the start of the crop. The x and y reference points of the cropped image are then shifted by 1012 times the pixel size and 712 times the pixel size, respectively, to acquire the coordinates in lat/long (ll), which then need to be converted to UTM (Universal Transverse Mercator) using the command ll2utm.
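
That pixel arithmetic can be sketched in a few lines. The reference point here is a stand-in (the real one comes out of cropregdata.dat), the one-arcsecond pixel size matches the values used in these interferograms, and the latitude shift is negative because row numbers increase downward in the image:

```python
dx = 0.0002777777777777778   # longitude pixel size, degrees (~1 arcsec)
dy = 0.0002777777777777778   # latitude pixel size, degrees

xref, yref = 101.0, 38.0     # hypothetical reference point (lon, lat)

fault_lon = xref + 1012 * dx  # shift 1012 pixels east
fault_lat = yref - 712 * dy   # shift 712 pixels down (south)

print(fault_lon, fault_lat)
```

These lat/long values are what then go through ll2utm to become UTM coordinates.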

There was one issue, though. Unfortunately, every time we ran the inversion, it was unstable. That means the values were going to extremes, like no slip, or the magnitude was off (by a lot). In addition, the computer kept wanting to exceed the bounds we had input.

Gareth suggested that instead of an inverse model we do a forward model. This uses oksar3 oksar3.inp, which produces a file similar to param.out. This type of model uses parameters and theory to predict a deformation pattern, which we then view with mdx.

Unfortunately, I forgot to take screenshots of the end result and the entire process. When I meet with Gareth again this upcoming Friday, we will still attempt to produce an inverse model of this earthquake, and I won’t forget to take pictures this time!  

Modeling Earthquakes

As promised, here is what I learned about how to model earthquakes.

Disclaimer! The process is extremely tedious (or at least that’s what I think).

Before Gareth left on vacation, he showed me how to model an earthquake that occurred in Huarichancara, Peru back in 2016. Here is the end result:

Astrid blog 2 img 1.jpg

I will explain what you are looking at later on.

There are numerous steps to modeling, but I’ll break it down.

In my previous post I discussed the process involved with obtaining interferograms. These interferograms are layered through what is called BIL (Band Interleaved by Line); this is a method that allows for image encoding of data obtained from satellites.

The layering is:    
    Phase
    Amplitude
    Phase
    Amplitude, and so on.

The phase layer is what is used to actually model the earthquake, whereas the amplitude is the topographic aspect used to make the interferogram.
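
Splitting the layers is just de-interleaving those lines. Here is a toy Python illustration of the idea (my own sketch, assuming the phase-then-amplitude line order listed above, with small lists standing in for rows of float32 values):

```python
ncols, nrows = 4, 3
phs = [[2.0] * ncols for _ in range(nrows)]  # phase rows
amp = [[1.0] * ncols for _ in range(nrows)]  # amplitude rows

# Interleave by line, the way the BIL file stores them:
interleaved = []
for p_line, a_line in zip(phs, amp):
    interleaved.append(p_line)
    interleaved.append(a_line)

# Splitting (what rmg2mag_phs does for real files) de-interleaves them:
phs_out = interleaved[0::2]
amp_out = interleaved[1::2]
print(phs_out == phs and amp_out == amp)  # -> True
```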

The first step in the modeling process is to separate the phase and amplitude. Before I get too far ahead of myself, I must first make a modeling directory for each of my earthquakes. Then within that, I make two more directories, one for data and one for okinv (I'll explain what that is later), and within my data directory I make another one for ascending or descending (depending on where the earthquake is).

As I mentioned, I need to split my amplitude and phase of the interferogram. The command to do so in the Linux terminal (which is what I use to organize files, make directories, and run programs on the computer) is rmg2mag_phs. This is what allows me to split the unwrapped interferogram into its component parts.

More specifically, what I type into the terminal is this:

rmg2mag_phs filt_topophase.unw.geo filt_topophase.amp.geo filt_topophase.phs.geo 4747

As you can see, I split the interferogram into the phase and amplitude. The number at the end represents the number of columns in the interferogram which is imperative to add at the end of the command.

Gareth showed me an easy way to obtain that column value, which is this command:

head filt_topophase.unw.geo.vrt

The “head” command shows the first several lines in the file.

After I have made the necessary directories and have now split my phase and amplitude, the next step is to crop and mask the interferogram.

Gareth ran through the steps with me, and recently I have been working on modeling an earthquake that occurred near Hongtu, China. For that earthquake, here is what the unwrapped interferogram looks like:

Astrid blog 2 img 2.jpg

Now that I have the interferogram split, I can now go to MatLab to edit the script crop_data.m where I input the information for the Hongtu, China earthquake. Fortunately for me, Gareth created all the scripts with me, and I can just copy those into the appropriate directories then edit as necessary.

Here is how the crop_data.m script appears on MatLab:

% crop_data.m -- a script to load in various raster data files, view and crop them

% gjf, 03-feb-2017
close all
infile = 'filt_topophase.phs.geo'; % interferogram unwrapped phase file
incols = 5400;% number of columns in that file
outfile = 'asc_crop_1200.r4'; % name for our cropped interferogram file

% open interferogram file for reading...

infileid = fopen(infile,'r');

% and read it (and transpose it at the same time!)
% (this assumes your file is a float32 [= 4 byte real] file)

indata = fread(infileid,[incols,Inf],'float32')';

% convert to los displacement in meters!

indata=indata*0.0555/(4*pi); % sentinel-1 wavelength is 5.55 cm

% and plot it!
figure
imagesc(indata)
colorbar
axis image

% crop the interferogram down to the part with the earthquake

cropdata=indata(3000:4200,1100:2300);

figure
imagesc(cropdata)
colorbar
axis image

% write out the cropped file

outfileid=fopen(outfile,'w');
fwrite(outfileid,cropdata','float32');% re-transpose it here

% and close your files

fclose(infileid);
fclose(outfileid);

The %’s mark comments: side notes and anything I don’t want MATLAB to run.

Here are the cropped and uncropped images:

Astrid blog 2 img 3.jpg

 

After this, I need to mask out anything that is not relevant to the earthquake, such as lakes, to exclude points of low correlation.

I must do the same for the correlation file and input the correct information in the crop_data_correlation.m script, where information on the new registration point (to know what the whole interferogram is referenced to) for my cropped image is entered.

Here is that script:

% crop_data_correlation.m -- a script to load in various raster data files, view, crop and mask them

% gjf, 03-feb-2017
close all
intfile = 'filt_topophase.phs.geo';% interferogram file to be cropped and masked
corfile = 'topophase.cor.geo.r4'; % correlation file (information for mask)
incols = 5400;% number of columns in that file
outfile = 'asc_crop_mask_1200.r4'; % name for our cropped, masked interferogram file
regx = 101.14166666666667 ; % longitude of registration point
regy = 38.63527777777778 ;% latitude of registration point
dx =0.0002777777777777778 ; % longitude pixel size
dy = 0.0002777777777777778 ;% latitude pixel size

mincor = 0.25; % minimum correlation for mask

startx = 1100;% x coordinate of the start of your crop
starty = 3000;% y coordinate of the same thing
cropsize = 1200;

% open interferogram and correlation files for reading...

intfileid = fopen(intfile,'r');
corfileid = fopen(corfile,'r');

% and read it (and transpose it at the same time!)
% (this assumes your file is a float32 [= 4 byte real] file)

intdata = fread(intfileid,[incols,Inf],'float32')';
cordata = fread(corfileid,[incols,Inf],'float32')';

% convert to los displacement in meters! (only for interferograms!)

intdata=intdata*0.0555/(4*pi); % sentinel-1 wavelength is 5.55 cm

% and plot it!

%imagesc(intdata)
%colorbar
%axis image

% crop the interferogram down to the part with the earthquake

cropint=intdata(starty:starty+cropsize-1,startx:startx+cropsize-1); % values substituted with variables
cropcor=cordata(starty:starty+cropsize-1,startx:startx+cropsize-1); % values substituted with variables

% calculate new registration point

shiftx = (startx-1)*dx;% shift in x direction for cropped image to new registration point
shifty = (-1)*(starty-1)*dy;% shift in y direction for same thing

newregx = regx + shiftx; % to find new registration point in x direction
newregy = regy + shifty; % to find new registration point in y direction

% calculate a mask

mask = cropcor>mincor ;
maskint = cropint.*mask ;

figure
imagesc(cropint)
colorbar
axis image

figure
imagesc(maskint)
colorbar
axis image

% write out the cropped file

outfileid=fopen(outfile,'w');
fwrite(outfileid,maskint','float32');% re-transpose it here

% and close your files

fclose(intfileid);
fclose(corfileid);

fclose(outfileid);

percent_masked = (1-(nnz(mask)/cropsize^2))*100;


disp(' ');
disp(sprintf('%6.3f percent of pixels masked at threshold %f',percent_masked,mincor))
disp(sprintf('registration coord.: %f %f',newregx,newregy))
disp(' ');

cropregdata=[regx regy; dx dy; cropsize cropsize];
save cropregdata.dat cropregdata -ascii

Now when I run this script, this is the image that appears:

Astrid blog 2 img 4.jpg

The top image is the unmasked and the bottom is the masked.

At this point, the interferogram and correlation files have been cropped and masked, and the new registration point of the cropped image has been found.

Next, the LOS (Line of Sight) needs to be split into the incidence and azimuth components, which then need to be cropped.

From then on, I need to start the quadtree decomposition. This process is data downsampling for image compression. At the moment, I am having difficulties with this part of the process.
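
To make the idea concrete, here is a toy quadtree decomposition (my own illustration, not the actual MATLAB script): keep splitting a square block into four quadrants until the values inside a block vary by less than some threshold.

```python
def quadtree(img, x, y, size, threshold, blocks):
    # Gather the values inside this block (x, y = top-left corner).
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if size == 1 or max(vals) - min(vals) <= threshold:
        blocks.append((x, y, size))      # block is flat enough: keep it
    else:
        half = size // 2                 # otherwise split into quadrants
        for ox, oy in ((0, 0), (half, 0), (0, half), (half, half)):
            quadtree(img, x + ox, y + oy, half, threshold, blocks)
    return blocks

# A flat 4x4 image with one anomalous pixel: only the quadrant holding
# it keeps splitting, giving 3 large blocks plus 4 small ones.
img = [[0.0] * 4 for _ in range(4)]
img[0][0] = 5.0
print(len(quadtree(img, 0, 0, 4, 1.0, [])))  # -> 7
```

This is also why the sample points end up concentrated along the edge of the earthquake signal: the phase varies fastest there, so those blocks get subdivided the most.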

When I figure this out thoroughly (meaning asking Gareth tons of questions when he returns), then I will post more!

Finding Earthquakes Using InSAR

When I first met with Gareth, I felt my brain slowly overloading. I took my notes and tried to process the material presented to me the best I could.

I will lay it out. Essentially, I am an earthquake “finder,” which means I am helping Gareth locate as many earthquakes as we can worldwide. We are doing this through InSAR, which stands for Interferometric Synthetic Aperture Radar (I’ll explain later). First, we find potential candidate earthquakes; then we download data from NASA’s EarthData website, under Data Discovery, using Vertex from ASF, the Alaska Satellite Facility.

Here’s the link to the website (you must have an account to be able to download anything though!) and this is what it looks like:

http://urs.earthdata.nasa.gov/

I mentioned InSAR; here is where the data comes from.

Now the data being downloaded is collected by the European Space Agency’s Sentinel-1 satellites. Essentially, the ESA describes it as, “...a constellation of two satellites orbiting 180° apart, the mission images the entire Earth every six days. As well as transmitting data to a number of ground stations around the world for rapid dissemination, Sentinel-1 also carries a laser to transmit data to the geostationary European Data Relay System for continual data delivery.” I retrieved that bit of information from their website, found here:

http://www.esa.int/Our_Activities/Observing_the_Earth/Copernicus/Sentinel-1/Introducing_Sentinel-1

That’s how I get the data I need to be able to process the interferograms we use to find target earthquakes.

The first step, is to find a good place to start the search.

After I first met with Gareth, my job was to compile a spreadsheet with possible candidates for earthquakes. I was to find earthquakes of magnitude 5.5 and above with depths of less than 20 kilometers (anything deeper is undetectable by the satellites).

He provided me with two websites:

http://www.globalcmt.org/CMTsearch.html and http://earthquake.usgs.gov/earthquakes/search/

I use the latter. I customize my search and when I hit the search button, this is what I find:

During this stage, I need to make sure the earthquakes I search for are onshore. Earth is primarily covered by water, so the majority of earthquake activity occurs offshore. As you can see in this screenshot, there are only a few potential candidates here.

After this part of the search I keep a detailed record on this spreadsheet.

That is a snippet of what my spreadsheet looks like. I record the location of the earthquake, latitude and longitude, magnitude, depth, and date and time. On the far left, I have an “interferogram ran” column, and in the far-right columns I have information about the frames and tracks I have downloaded from Vertex. I will show you that as well:

After I have my list of potential earthquakes, I download data on these earthquakes like I have already mentioned. I then match up the location from the USGS site with the map on Vertex and I search a time frame including before and after the earthquake date.

Now comes the more complicated part. I use what is called a terminal; I take the data I download, organize it, and then unzip the files. With the terminal, I had to learn the various commands needed to navigate all of the directories on the computer in Gareth’s office. This is what the terminal looks like:

Here is the terminal I had for an earthquake in Tajikistan that I was investigating. What you are seeing is a listing of what is in the swath 2 directory of the Tajikistan directory I created.

The terminal gives access to directories. We use those directories to organize files and to run programs such as Python, which is what’s used to run our topsApp.xml files.

At first, this process was confusing and frustrating to me. Now it’s definitely simpler to use (after understanding the directory structure). The / means root; the best analogy Gareth used to describe it to me was a tree. If you start a path with /, then you are going into the root directory. For instance, if I want to access the scratch directory, I type cd /scratch; in scratch, I can access the files we have for various earthquakes. The cd command stands for change directory, so if I type cd /data then I am accessing the files placed in data; I need the / to denote that I am starting from the root. When I want to see what is in a directory, I use ls, which means list.

I download the SLC files from Vertex into the data directory. SLC stands for Single Look Complex, a full-resolution image format that keeps both amplitude and phase. As I mentioned earlier, with my spreadsheet I have to make a guess as to which swath the earthquake is located in. I also record the path, which is the track number (the orbit that the satellite takes), and the frame number (the location along that track) of the data I download from Vertex. I can download either ascending or descending data.

Astrid blog 1 img 6.jpg

Here is a quick illustration I made. On the right is ascending (swaths 1, 2, 3) and on the left is descending (swaths 3, 2, 1). 

Each of those strips is a swath. Swaths are divided up into bursts. The satellites have a beam that swings in different directions which results in the bursts (little pieces of data).

After I input the swath, track, and frame numbers for my spreadsheet, I am ready to run the interferogram and hopefully find some earthquakes.

Here are the commands I type into my terminal:

ls /data

create_topsapp_folder.py -sf (data file name)

cd /scratch/(data file name)

ls

cd swath (pick)

ls

gedit topsApp.xml

topsApp.py topsApp.xml --steps

cd merged/

ls

mdx.py filt_topophase.flat.geo&

That last command displays the interferogram. These steps are much simpler thanks to a graduate student who had also been running interferograms.

Now I bet you are wondering, what do these interferograms look like?

Like this!

This is an interferogram I ran for an earthquake in Sary-Tash, Kyrgyzstan. The area where you see the color fringes is the earthquake. Of the many interferograms we have run, the number of earthquakes we have found is relatively low. The reasons may be that we are looking in the wrong locations, or that vegetation and water vapor in the atmosphere create noise or static, as you can see in certain patches here.

When Gareth first showed me an interferogram, I compared it to a tie-dye shirt. Wouldn’t you agree?

Anyway, now with the results of the interferogram, I have to take note of whether the interferogram is masked (meaning the earthquake is overwhelmed or washed out by aftershocks), decorrelated (meaning there is too much atmospheric noise), or coherent. Interferometric correlation means you get a signal, and coherence is when neighboring pixels are moving the same way, which results in lots of color.
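
The coherence idea can be written down compactly. This is the standard coherence estimate over a window of pixels from the two radar images (my own addition for illustration; it is not a script from our workflow):

```python
import cmath, math

def coherence(s1, s2):
    # gamma = |sum(s1 * conj(s2))| / sqrt(sum|s1|^2 * sum|s2|^2)
    # Near 1: neighboring pixels agree (coherent); near 0: decorrelated.
    num = abs(sum(a * b.conjugate() for a, b in zip(s1, s2)))
    den = math.sqrt(sum(abs(a) ** 2 for a in s1) *
                    sum(abs(b) ** 2 for b in s2))
    return num / den

# Two identical windows of unit-amplitude pixels are perfectly coherent:
win = [cmath.exp(1j * p) for p in (0.1, 0.2, 0.3, 0.4)]
print(round(coherence(win, win), 3))  # -> 1.0
```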

The next step, after obtaining these interferograms, is to model the earthquakes.

Gareth has listed me as an author on an abstract he put together about finding earthquakes using InSAR. The title is “A Systematic Study of Global Earthquake Detectability,” and we are going to model the twenty or so earthquakes he listed in this abstract.

I’ll post more details about the modeling process soon!