Transdimensional Bayesian inversion to infer kinematic finite-extent fault models for large earthquakes
A software package for Parametric Slip Inversion (PSI) of earthquake fault-slip models within a fully Bayesian framework (Hallo and Gallovic, 2020). The framework is transdimensional and data-driven, meaning that the model complexity is inferred directly from the data; this parsimony follows from a mathematical Occam's razor inherent to the Bayesian formulation. The transdimensional model space is sampled by Markov chain Monte Carlo (MCMC) with Parallel Tempering, enabling efficient exploration of high-dimensional, non-linear parameter spaces. The code is written in Fortran and parallelized with MPI (CPU), optimized for deployment on High-Performance Computing (HPC) clusters and large-scale seismic inversions.
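The interplay of tempered chains and occasional state swaps can be illustrated with a minimal sketch (Python for brevity; the package itself is Fortran/MPI). The bimodal toy target, temperature ladder, and step size below are illustrative assumptions, not values from the package:

```python
import math
import random

def log_target(x):
    # Toy bimodal 1-D posterior (stand-in for a multimodal slip-model posterior),
    # evaluated in a numerically safe log-sum-exp form.
    a = -0.5 * (x - 2.0) ** 2
    b = -0.5 * (x + 2.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def swap_log_accept(lp_i, lp_j, beta_i, beta_j):
    # Parallel Tempering swap rule for chains at inverse temperatures beta_i,
    # beta_j holding states with (untempered) log-posteriors lp_i, lp_j:
    # accept with probability min(1, exp((beta_i - beta_j) * (lp_j - lp_i))).
    return min(0.0, (beta_i - beta_j) * (lp_j - lp_i))

def parallel_tempering(n_steps, betas, step=1.0, seed=1):
    rng = random.Random(seed)
    x = [0.0] * len(betas)           # one chain per temperature
    cold = []                        # samples from the beta = betas[0] chain
    for _ in range(n_steps):
        # Within-chain Metropolis update on the tempered target pi(x)**beta
        for k, beta in enumerate(betas):
            prop = x[k] + rng.gauss(0.0, step)
            if math.log(rng.random()) < beta * (log_target(prop) - log_target(x[k])):
                x[k] = prop
        # Propose to swap the states of a random neighbouring pair of chains
        k = rng.randrange(len(betas) - 1)
        if math.log(rng.random()) < swap_log_accept(
                log_target(x[k]), log_target(x[k + 1]), betas[k], betas[k + 1]):
            x[k], x[k + 1] = x[k + 1], x[k]
        cold.append(x[0])
    return cold

samples = parallel_tempering(5000, betas=[1.0, 0.5, 0.25, 0.1])
```

Hot chains (small beta) see a flattened posterior and cross barriers between modes easily; accepted swaps propagate those jumps down the temperature ladder to the cold chain, whose samples approximate the posterior.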
Hallo, M., Gallovic, F. (2020). Bayesian self-adapting fault slip inversion with Green's functions uncertainty and application on the 2016 Mw7.1 Kumamoto earthquake, Journal of Geophysical Research: Solid Earth, 125, e2019JB018703. https://doi.org/10.1029/2019JB018703
Keywords: Bayesian Inference, Markov Chain Monte Carlo (MCMC), Uncertainty Quantification, High-Performance Computing (HPC), Code Parallelization by MPI (CPU), Transdimensional Bayesian Inference, Data-driven Inversion, Occam's Razor
The official software version is archived on Zenodo.
2.0 (Dawn) — Numerical Engine Revision | December 2019
- Enhanced Occam's Razor: Improved Bayesian model selection logic to more effectively infer optimal model complexity from observed data
- Memory Optimization: Implemented REAL4 (Fortran) precision for covariance matrices, significantly reducing memory footprint for large-scale HPC inversions
- Advanced Visualization: New 2D fault-surface plotting tools for posterior distributions
- Key Publication: Core implementation used in the paper published in Journal of Geophysical Research: Solid Earth (Hallo and Gallovic, 2020)
1.0 (Chasm) — Initial Release | January 2018
- Internal-only version
- This version served as the core computational framework for the author's PhD thesis
Required third-party libraries:
- Compiled `pt.f90` - Parallel Tempering library (M. Sambridge)
  http://www.iearth.edu.au/codes/ParallelTempering/
- Compiled `time_2d.c` - Finite-difference computation of 2D travel times (P. Podvin)
  https://github.com/fgallovic/RIKsrf/blob/master/src-RIKsrf/Time_2d.c
- Compiled `gr_nez.for` and `cnv_nez.for` - Discrete wavenumber method for Green's functions computation (M. Bouchon)
  https://github.com/fgallovic/LinSlipInv/tree/master/src-dwn
Requirements:
- LINUX/UNIX machine with `LAPACK` or `MKL` libraries
- Fortran90 (`gfortran` or `ifort`) and MPI (`mpif90` or `mpiifort`) compilers
- MATLAB R2016b or newer for plotting results
Package contents:
- `dwn` - Directory containing the Green's functions computation
- `examples` - Directory containing examples of input files
- `input` - Directory with input files
- `inv` - Work directory for the ongoing inversion
- `lib` - Directory with compiled libraries and additional functions
- `src` - Directory with source codes of the PSI (includes `Makefile`)
- `plot_prepare.m` - Plots prepared sub-faults for Green's functions
- `plot_psi_fit.m` - Plots the final data fit
- `plot_psi_model.m` - Plots the final fault-slip model
- `plot_psi_posteriorPDF.m` - Plots ensemble statistics
- `run_psi.sh` - Run the PSI inversion
- `run_res.sh` - Run post-processing of the PSI inversion
Installation:
- Compile the Green's functions computation codes in the `dwn` folder
- Copy the required third-party `PT` and `Time_2d` libraries into the `lib` folder
- Set your compilers in the `Makefile` in the `src` folder and type: `make`
- Check for the `stations`, `prepare`, `waves`, `psi_sp`, `psi_ap`, and `psi_pp` binary programs in the project root directory
Run:
- Set input parameters in the files of the `input` folder
- Run `./stations` to prepare the list of stations in X,Y coordinates (or prepare it manually)
- Run `./prepare` to prepare input files for the Green's functions computation
- Compute Green's functions and copy `NEZsor.dat` into the PSI root folder
- Prepare observed data in the `NEZwave.dat` file in the PSI root folder (ASCII file)
- Run `./waves` to prepare the vector of observed data `var_Unez.tmp`
- Execute the PSI inversion using the `./run_psi.sh` bash script
- Execute the PSI post-processing using the `./run_res.sh` bash script
- Plot and inspect results in the `inv` folder using the MATLAB scripts in the PSI root folder
Note: See the included example files for the structure of the ASCII input files and observed data.
The flowchart below illustrates the Manager-Worker paradigm implemented via MPI. A dedicated manager (master) rank coordinates communication and task distribution (Parallel Tempering swaps), while multiple worker ranks perform independent MCMC sampling in parallel to maximize HPC throughput.
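The pattern can be mocked up in a few lines of single-process Python (a schematic only; the actual implementation is Fortran with MPI ranks, and the class/function names and the toy Gaussian posterior here are illustrative, not part of the package). Each worker advances its own chain and reports a log-posterior value; the manager then decides Parallel Tempering swaps. This sketch exchanges only the temperatures rather than full model states, a common low-communication variant:

```python
import math
import random

class Worker:
    """Stands in for one MPI worker rank running an independent MCMC chain."""
    def __init__(self, beta, seed):
        self.beta = beta                  # inverse temperature assigned by the manager
        self.x = 0.0                      # current model (a single scalar in this toy)
        self.rng = random.Random(seed)

    def log_posterior(self, x):
        return -0.5 * x * x               # toy unit-Gaussian posterior

    def advance(self, n_steps=10):
        """Local Metropolis sampling; in the real code this is the expensive part."""
        for _ in range(n_steps):
            prop = self.x + self.rng.gauss(0.0, 1.0)
            if math.log(self.rng.random()) < self.beta * (
                    self.log_posterior(prop) - self.log_posterior(self.x)):
                self.x = prop
        return self.log_posterior(self.x)  # "sent" back to the manager

def manager(n_rounds=200, betas=(1.0, 0.5, 0.25), seed=7):
    """Stands in for the master rank: gathers results and decides PT swaps."""
    rng = random.Random(seed)
    workers = [Worker(b, seed=i) for i, b in enumerate(betas)]
    for _ in range(n_rounds):
        logps = [w.advance() for w in workers]   # MPI: gather from all workers
        k = rng.randrange(len(workers) - 1)      # pick a neighbouring pair
        w1, w2 = workers[k], workers[k + 1]
        log_acc = (w1.beta - w2.beta) * (logps[k + 1] - logps[k])
        if math.log(rng.random()) < log_acc:
            w1.beta, w2.beta = w2.beta, w1.beta  # MPI: send swapped temperatures
    return workers
```

Because only scalar temperatures (or, equivalently, swap decisions) travel between ranks, the workers spend nearly all wall time on sampling, which is what makes the scheme scale well on HPC clusters.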
Copyright (C) 2017-2019 Miroslav Hallo and Frantisek Gallovic
This program is published under the GNU General Public License (GNU GPL).
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This code is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY. We kindly ask you to acknowledge the authors and not to remove their names from the code.
You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.