Reservoir characterization

Pretrained Foundation Model Enables Rapid Seismic Interpretation

This paper presents a self-supervised approach for training a seismic foundation model and demonstrates scenarios in which it is used for seismic data conditioning, interpretation, and inversion through six real-use cases.

Source: SPE 227546.

While robust seismic interpretation and modeling are essential to successful subsurface mapping and characterization, they remain challenging because of their strong dependency on data quality, domain knowledge, and expert supervision. Pretraining a large seismic foundation model (FM) can provide rich and reliable representations of the diverse seismic patterns observed in post-stack seismic images, which can then be adapted to multiple downstream workflows. This paper presents a self-supervised approach for training such a seismic FM and demonstrates its applications.

Seismic FM

Model Architecture. The proposed seismic FM comprises an encoder, a decoder, three refiners, and five taskers, which together enable multitask training from a shared embedding space. Specifically, the encoder, which generates a rich embedding from the input seismic data, is a vision-transformer-large (ViT-L) architecture initialized from the DINOv2 model pretrained on natural images as its starting checkpoint.
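The layout described above, one shared encoder whose embedding is passed through refiners and then fanned out to several task heads, can be sketched as follows. This is a minimal pure-Python illustration of the multitask wiring only; all component names, scale factors, and the toy transforms are hypothetical stand-ins, not the paper's actual implementation.

```python
# Hypothetical sketch of a shared-embedding multitask layout:
# encoder -> refiners -> five taskers, as described in the article.
from typing import Callable, Dict, List

Vector = List[float]

def encoder(seismic_patch: Vector) -> Vector:
    """Stand-in for the ViT-L encoder: maps input to a shared embedding."""
    # Placeholder transform; the real encoder is a vision transformer
    # initialized from a DINOv2 checkpoint.
    return [x * 0.5 for x in seismic_patch]

def make_refiner(scale: float) -> Callable[[Vector], Vector]:
    """Stand-in for one of the three refiners (names/behavior assumed)."""
    return lambda emb: [x * scale for x in emb]

def make_tasker(name: str) -> Callable[[Vector], float]:
    """Stand-in for one of the five taskers (e.g., conditioning, inversion)."""
    return lambda emb: sum(emb)  # toy task head producing a scalar

refiners = [make_refiner(s) for s in (1.0, 0.9, 1.1)]
taskers = {f"task_{i}": make_tasker(f"task_{i}") for i in range(5)}

def forward(patch: Vector) -> Dict[str, float]:
    """One forward pass: shared embedding feeds every task head."""
    emb = encoder(patch)
    for refine in refiners:
        emb = refine(emb)
    return {name: head(emb) for name, head in taskers.items()}

outputs = forward([1.0, 2.0, 3.0])  # five task outputs from one embedding
```

The point of the structure is that every downstream task consumes the same pretrained embedding, so the expensive representation learning is done once and the task heads stay lightweight.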
