Data mining/analysis

Zero-Shot Learning With Large Language Models Enhances Drilling-Information Retrieval

This paper tests several commercial large language models for information-retrieval tasks for drilling data using zero-shot, in-context learning.

Fig. 1—A simplified chart of a RAG-based LLM question/answer process with context application. The green boxes show document retrieval, and blue boxes show LLM model completion. This paper focuses on evaluating the different LLMs in the blue part while controlling the retrieved context.
Source: SPE 217671.

Finding information across multiple databases, formats, and documents remains a manual job in the drilling industry. Large language models (LLMs) have proven effective in data-aggregation tasks, including question answering. However, using LLMs to produce factually reliable, domain-specific responses poses a nontrivial challenge. The cost of the expert labor needed to train domain-specific LLMs puts custom question-answering bots out of reach for niche industries.
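The retrieval-augmented flow in Fig. 1 can be sketched in a few lines: retrieved document chunks are prepended to the user's question to form a zero-shot, in-context prompt for the LLM. The sketch below is illustrative only; the keyword-overlap retriever stands in for a real vector store, and the function names (`retrieve_context`, `build_prompt`) and the sample drilling documents are assumptions, not from the paper.

```python
def retrieve_context(question, documents, top_k=2):
    """Toy retriever: rank documents by word overlap with the question.
    A production system would use embeddings and a vector store instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(question, context_chunks):
    """Assemble a zero-shot prompt: retrieved context followed by the question.
    This string would be sent to the LLM for completion (the blue boxes in Fig. 1)."""
    context = "\n".join(f"- {chunk}" for chunk in context_chunks)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )


# Hypothetical drilling-report snippets for illustration.
docs = [
    "Daily drilling report: bit depth reached 12,450 ft on 3 May.",
    "Mud log: gas peak observed at 11,900 ft.",
    "HSE bulletin: no incidents recorded this week.",
]
question = "What bit depth was reached on 3 May?"
prompt = build_prompt(question, retrieve_context(question, docs))
```

Controlling the retrieved context, as the paper does, amounts to fixing the output of `retrieve_context` so that only the LLM-completion step varies between the models being compared.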
