Seismic Foundation Model

Hanlin Sheng¹, Xinming Wu¹ † ‡, Xu Si¹, Jintao Li¹, Sibo Zhang², Xudong Duan²

¹ University of Science and Technology of China   ² Huawei

† Corresponding Author   ‡ Project Lead

-----------------

# 🌟 Seismic Foundation Model (SFM)

As shown in the workflow figure, we test the Seismic Foundation Model's performance on segmentation and regression tasks, specifically classification (i.e., seismic facies), segmentation (i.e., seismic geobody), signal processing (i.e., denoising), inversion (i.e., reflectivity estimation), and interpolation.

This is a PyTorch/GPU implementation of the paper [Seismic Foundation Model](https://arxiv.org/abs/2309.02791):

```
@article{sheng2023seismic,
  title={Seismic Foundation Model (SFM): a new generation deep learning model in geophysics},
  author={Sheng, Hanlin and Wu, Xinming and Si, Xu and Li, Jintao and Zhang, Sibo and Duan, Xudong},
  journal={arXiv preprint arXiv:2309.02791},
  year={2023}
}
```

## 🌟 News

* **2023.9.7:** The paper is released on arXiv; the code will be released gradually. ⌛⌛⌛
* **2023.8.7:** GitHub repository initialization (copied from Meta-Transformer).

## 👉 Pre-train & Fine-tune Code

* The pre-training instructions are in [PRETRAIN.md](SFM-Pretrain/README.md).
* The fine-tuning instructions are in [FINETUNE.md](SFM-Finetune/README.md).
## :rocket: Model Zoo & Data Release

Open-source pretrained models:

| Model | Pretraining Size | Download |
|---------------|:---------:|:------:|
| SFM-Base      | 224 × 224 | [ckpt] |
| SFM-Base-512  | 512 × 512 | [ckpt] |
| SFM-Large     | 224 × 224 | [ckpt] |
| SFM-Large-512 | 512 × 512 | [ckpt] |

Open-source training & downstream fine-tune task data:

| Task | Size | Download |
|:-----------------------------------:|:---------:|:---------:|
| PreTrain                            | 224 × 224 | [DatFile] |
| Seismic Facies Classification       | 768 × 768 | [DatFile] |
| Seismic GeoBody Identification      | 224 × 224 | [DatFile] |
| Inversion (Reflectivity Estimation) | 224 × 224 | [DatFile] |
| Signal Processing (Denoise)         | 224 × 224 | [DatFile] |
| Interpolation                       | 224 × 224 | [DatFile] |
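The released datasets are provided as `.dat` files at the patch sizes listed above. As a minimal sketch of how one such patch might be inspected, the snippet below assumes a raw float32 layout and the hypothetical file name `seismic_patch.dat`; the repository's dataloaders are the authoritative reference for the actual format.

```python
import numpy as np

# Hypothetical example: inspect one pretraining patch stored as a raw binary file.
# The file name, dtype (float32), and shape (224 x 224) are assumptions for
# illustration -- check the repository's data-loading code for the actual format.
patch_path = "Data/seismic_patch.dat"
patch = np.fromfile(patch_path, dtype=np.float32)

assert patch.size == 224 * 224, "unexpected size for a 224 x 224 patch"
patch = patch.reshape(224, 224)

# Standardize for a quick visual check (e.g., with matplotlib.pyplot.imshow).
patch = (patch - patch.mean()) / (patch.std() + 1e-8)
print(patch.shape, float(patch.min()), float(patch.max()))
```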
# :neckbeard: Quick Guide

## Installation

To prepare the environment, follow the instructions below; a short sanity check for the finished environment is sketched at the end of this guide.

```shell
# create virtual environment
conda create -n SFM python=3.9.12
conda activate SFM

# install pytorch
pip3 install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html

# install other requirements
pip install -r requirements.txt

# if you want to visualize the results as shown in SFM-Finetune/Application/visualization.ipynb
pip install jupyter notebook
python -m ipykernel install --user --name=SFM --display-name="Python (SFM)"
```

## Download Dataset & Model

Place the downloaded dataset and model in the corresponding folders.

- If you want to pre-train a foundation model from scratch, download the ```Pretrain data``` zip files into the ```Data``` folder.

```shell
# first merge the split archive
zip -s 0 mae_data_more.zip --out pretrain.zip
# unzip the merged archive
unzip pretrain.zip
```

- If you want to use our pre-trained model directly, download the ```Pre-trained model``` and place it in the folder ```SFM-Pretrain/output_dir```.

```shell
cd SFM-Pretrain
mkdir output_dir
cd output_dir
```

- If you want to apply the model to downstream tasks, download the downstream task data zip files into the ```Data``` folder.

```shell
cd Data
unzip *.zip
```

## Facies Example

1. Download the downstream facies task model [facies.pth](https://rec.ustc.edu.cn/share/2c102b40-057f-11ef-9b0d-cd9b2fe068c4) and place it in the folder ```SFM-Finetune/Application/Facies/SFM-Finetune/```.
2. Download the downstream [Facies Data](https://rec.ustc.edu.cn/share/d6cd54a0-e839-11ee-982a-9748e54ad7a4), place it in the folder ```Data/```, then run ```unzip *.zip```.
3. Run the following code:

```shell
cd SFM-Finetune/Application
# use jupyter notebook to open visualization.ipynb
jupyter notebook
```
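Before launching the notebook, it can be useful to confirm that the `SFM` environment created in the Installation step sees the expected PyTorch build and a GPU. This is an optional sanity check, not part of the original instructions:

```python
import torch

# Optional sanity check for the SFM environment (not required by the repo).
print("PyTorch version:", torch.__version__)        # the install step pins 1.8.1+cu111
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```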
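Once `facies.pth` is in place, a quick way to verify the download is to load the checkpoint on the CPU and list a few of its tensors. Whether the file stores a bare `state_dict` or nests it under a `"model"` key is an assumption here; `visualization.ipynb` shows how the checkpoint is actually consumed.

```python
import torch

# Load the fine-tuned facies checkpoint on the CPU and peek at its contents.
# The path matches the placement described in step 1 of the Facies Example.
ckpt_path = "SFM-Finetune/Application/Facies/SFM-Finetune/facies.pth"
ckpt = torch.load(ckpt_path, map_location="cpu")

# Assumption: the checkpoint is a state_dict, possibly nested under a "model" key;
# adjust to match what visualization.ipynb actually expects.
state_dict = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt
print(f"{len(state_dict)} entries in the checkpoint")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```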
# License

This project is released under the [MIT license](LICENSE).