While robust seismic interpretation and modeling are essential to successful subsurface mapping and characterization, they remain challenging because of their strong dependence on data quality, domain knowledge, and expert supervision. Pretraining a large seismic foundation model (FM) can provide rich and reliable representations of the diverse seismic patterns observed in post-stack seismic images, which can then be adapted to multiple downstream workflows. This paper presents a self-supervised approach for training such a seismic FM and demonstrates its applications.
Seismic FM
Model Architecture. The proposed seismic FM consists of an encoder, a decoder, three refiners, and five taskers, which together enable multitask training from a shared embedding space. Specifically, the encoder generates a rich embedding from the input seismic image; it uses a vision-transformer-large (ViT-L) architecture initialized from the Dino-v2 model pretrained on natural images as the starting checkpoint.
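The sketch below illustrates one way this encoder/decoder/refiner/tasker layout could be wired up, with all task heads reading from the shared embedding. It is a minimal assumption-laden example, not the authors' implementation: the class name, the refiner and tasker structures, and the output channel counts are illustrative; only the ViT-L encoder initialized from the public Dino-v2 checkpoint (loaded here via torch.hub) reflects the text.

```python
# Minimal sketch of the shared-embedding multitask design described above.
# Module names and internal structures are assumptions for illustration.
import torch
import torch.nn as nn

class SeismicFM(nn.Module):
    def __init__(self, num_refiners=3, tasker_out_channels=(1, 1, 1, 1, 1)):
        super().__init__()
        # Encoder: ViT-L/14 initialized from the Dino-v2 checkpoint
        # pretrained on natural images (public torch.hub entry point).
        self.encoder = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14")
        embed_dim = self.encoder.embed_dim  # 1024 for ViT-L

        # Decoder: maps patch tokens to a shared embedding (assumed structure).
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.GELU(),
            nn.Linear(embed_dim, embed_dim),
        )
        # Refiners: residual feature-refinement blocks (assumed structure).
        self.refiners = nn.ModuleList(
            nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.GELU())
            for _ in range(num_refiners)
        )
        # Taskers: lightweight task-specific heads, one per downstream task,
        # all reading from the same shared embedding space.
        self.taskers = nn.ModuleList(
            nn.Linear(embed_dim, c) for c in tasker_out_channels
        )

    def forward(self, x):
        # x: (B, 3, H, W), H and W divisible by the 14-pixel patch size.
        tokens = self.encoder.forward_features(x)["x_norm_patchtokens"]
        shared = self.decoder(tokens)
        for refiner in self.refiners:
            shared = shared + refiner(shared)  # residual refinement
        # One prediction per tasker, all from the shared embedding.
        return [tasker(shared) for tasker in self.taskers]

model = SeismicFM()
outputs = model(torch.randn(1, 3, 224, 224))
print([o.shape for o in outputs])  # five per-task outputs over 16x16 patches
```

Keeping the taskers as thin heads over one shared embedding is what lets the pretrained representation be reused across downstream workflows without retraining the encoder for each task.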