AI/machine learning

Guest Editorial: Shale to Silicon Where Oil and Gas Fits Into the Fifth Industrial Revolution

This guest editorial explores the rise of agentic AI and its potential impact on oil and gas professionals.

Source: PonyWang/Getty Images.

As we begin 2026, the US remains the world’s largest producer of crude oil, averaging about 13.6 million B/D in 2025. While growth appears to be slowing and output is expected to drop modestly to 13.5 million B/D in 2026, according to recent figures from the US Energy Information Administration, the US has nonetheless reminded us that exploiting fossil fuels is not the ultimate problem.

The bigger challenge is how we minimize or eliminate emissions while meeting global energy needs.

In 2019, I wrote about our industry’s quest for operational efficiency, and last year I described an artificial intelligence (AI) shockwave. Today, I believe we are also witnessing the convergence of shale and silicon logic, driven by what is now called the Fifth Industrial Revolution.

This new era is one in which human ingenuity and machine reasoning will coevolve to confront the dual challenge of energy security and environmental responsibility. The effort will be guided by the twin imperatives of stewardship and ecocentrism.

To understand what makes this shift a key part of the Fifth Industrial Revolution, consider the arc of industrial progress.

The First Industrial Revolution introduced mechanization through steam power. The second harnessed electric power to enable mass production. The third brought digital computing and automation. The fourth, which I described in 2019, linked the digital and physical worlds through the industrial internet of things, cloud computing, and advanced analytics.

The fifth transcends connectivity with autonomous agents that perceive, learn, and act. With these systems, humans and machines collaborate to solve previously intractable problems. In oil and gas, this means advancing from digitally assisted operations to self-optimizing systems that balance competing objectives in real time.

This article explores how that transformation is unfolding, what it means for our profession, and why it represents a pivotal moment. The impact of AI on oil and gas has accelerated through three distinct waves, each building upon the last, with the third wave defining the Fifth Industrial Revolution itself.

AI’s Third Wave: Agents, Autonomy, and Adaptation

The first wave of AI, from 2015 to 2020, built the physical foundation: specialized chips and cloud infrastructure that enabled large-scale machine learning. A second wave, from 2020 to 2024, centered on the capabilities made possible by large language models (LLMs) and generative AI, powered by graphics processing unit (GPU) acceleration, which expanded what machines could create, from automated reporting to the synthesis of realistic data.

The third wave, from 2024 to the present, has been described by some as the era of agency: autonomous digital systems that plan, act, and learn continuously without explicit instruction.

An AI agent is not only a model that answers questions, but one that is also capable of handling key tasks without constant supervision. These routines include monitoring conditions, retrieving relevant information, performing analyses, and adapting strategies as new data arrive.

For example, a drilling agent might search technical literature, perform optimization using specialized numerical tools, and simulate well trajectories in reservoir-modeling software. It can refine its approach continuously as new information emerges. In essence, an agent delivers value by solving multistep problems and running workflows that once required teams of specialists.
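To make this concrete, a minimal agent loop might look like the Python sketch below; the tool names, thresholds, and telemetry are invented for illustration and do not correspond to any specific product or workflow.

```python
# A minimal, illustrative agent loop (not any vendor's implementation).
# The "tools" below are hypothetical stubs; a real agent would call an LLM
# planner and domain software (literature search, optimizer, simulator).

from dataclasses import dataclass, field


@dataclass
class DrillingAgent:
    objective: str                      # e.g., "minimize drilling time within dogleg limits"
    memory: list = field(default_factory=list)

    def observe(self, telemetry: dict) -> None:
        """Ingest new downhole data as it arrives."""
        self.memory.append(telemetry)

    def plan(self) -> list:
        """Choose the next tools to invoke, given the objective and memory."""
        steps = ["search_literature", "optimize_trajectory"]
        latest = self.memory[-1] if self.memory else {}
        if isinstance(latest, dict) and latest.get("vibration", 0) > 0.8:
            steps.append("simulate_trajectory")   # escalate when risk rises
        return steps

    def act(self, step: str) -> dict:
        """Dispatch a planned step to the corresponding (stubbed) tool."""
        tools = {
            "search_literature": lambda: {"note": "relevant papers retrieved"},
            "optimize_trajectory": lambda: {"note": "trajectory updated"},
            "simulate_trajectory": lambda: {"note": "well path simulated in reservoir model"},
        }
        return tools[step]()

    def run(self, telemetry_stream) -> None:
        """Continuous monitor -> plan -> act -> adapt cycle."""
        for telemetry in telemetry_stream:
            self.observe(telemetry)
            for step in self.plan():
                self.memory.append(self.act(step))   # results inform the next plan


# Toy usage: a vibration spike in the stream triggers the escalation branch.
agent = DrillingAgent(objective="minimize drilling time within dogleg limits")
agent.run([{"vibration": 0.2}, {"vibration": 0.9}, {"vibration": 0.3}])
```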

From Sense–Interpret–Compute–Act to Pretrain–Post-Train–Infer

For decades, our workflow followed a strict sequence: sense, interpret, compute, and act. Seismic surveys and wireline logs provided raw data, then engineers interpreted them, models predicted outcomes, and finally, operators acted on results. Each step awaited the previous one in a disciplined chain of causality.

AI reshapes this sequence with pretraining, post-training, and inference, a progression introduced by the LLM community to describe the layered learning stages that define modern AI systems. On its face, this routine is also inherently sequential.

However, unlike traditional workflows, these stages now interact and overlap. A model pretrained on historical well data may continue refining itself during post-training as new field data arrive, while inference at the edge can feed back insights that trigger retraining upstream.

This converts a linear workflow into a continuous learning feedback loop, where sensing, interpretation, computation, and action reinforce one another in near real time. This feedback-rich cycle marks the true transition from digital automation to adaptive intelligence.
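A highly simplified sketch of that feedback loop, with toy stand-ins for each learning stage, might look like this; the functions and numbers are invented purely to show the control flow.

```python
# Illustrative control flow only: each "stage" is a trivial stand-in for
# real pretraining, fine-tuning, and inference infrastructure.

import random


def pretrain(archive):                  # episodic, compute-heavy (stubbed)
    return {"baseline": sum(archive) / len(archive)}


def post_train(model, batch):           # periodic fine-tuning on data in motion (stubbed)
    model["baseline"] = 0.9 * model["baseline"] + 0.1 * (sum(batch) / len(batch))
    return model


def infer(model, reading):              # continual, lightweight prediction (stubbed)
    return reading - model["baseline"]


def continuous_learning_loop(archive, stream, drift_threshold=5.0):
    model = pretrain(archive)           # data at rest
    recent = []
    for reading in stream:              # data in motion; inference never rests
        residual = infer(model, reading)
        recent.append(reading)
        # Edge insight feeds back upstream: sustained drift triggers retraining.
        if len(recent) >= 10 and abs(residual) > drift_threshold:
            model = post_train(model, recent[-10:])
    return model


# Toy usage: an archived baseline plus a slowly drifting live stream.
archive = [random.gauss(100, 2) for _ in range(1000)]
stream = [random.gauss(100 + t / 50, 2) for t in range(500)]
continuous_learning_loop(archive, stream)
```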

To frame these three phases in the context of the oil field, it is useful to revisit two concepts from my earlier writings referenced above, both inspired by Russo and Schoemaker's seminal work on metaknowledge.

I drew a distinction between knowledge and metaknowledge to offer a practical guide to intelligent decision-making.

Knowledge represents the vast body of information we have accumulated. Metaknowledge, by contrast, is our awareness of what is relevant to the problem at hand. It provides our sense of what to look for, what to trust, and what to ignore. As I wrote previously, knowing when to see a doctor (i.e., metaknowledge) is often more important than knowing medicine itself (i.e., knowledge).

In the AI era, machines process vast knowledge, but metaknowledge remains uniquely human. I extended this framework to information flow: data at rest (i.e., archived interpretations) vs. data in motion (i.e., real-time operational streams).

Pretraining, Post-Training, and Inference

Pretraining builds primary knowledge from massive datasets such as archived data at rest. Models pretrained on well logs, production histories, and seismic surveys develop intuition about reservoir behavior analogous to that of experienced engineers, but across orders of magnitude more data.

Post-training refines that primary knowledge into domain-specific metaknowledge, augmented by data in motion, for applications such as reservoir management or drilling optimization. For example, a post-trained model learns the peculiarities of a specific acreage in the Permian Basin, such as how rock properties, completion designs, and operational constraints deviate from global patterns.

Inference is where machines think in real time by applying knowledge derived from data in motion to dynamic processes. Unlike traditional computing, where processing stops once an answer is delivered, inference never rests. Every AI prompt, model update, and optimization cycle consumes compute power continuously.

In the oilfield sector, one of inference's most critical roles is anticipating events. This includes detecting early signs of bit failure, recognizing when a logging tool may become stuck, or predicting when a producing well might require stimulation.

When an AI agent detects an anomalous pressure gradient in Well #247, it is not executing a preprogrammed script. It is reasoning across thousands of similar and dissimilar scenarios it has learned, weighing probabilities, and generating recommendations that evolve as new data arrive.
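A real agent reasons over far richer context, but the statistical core of that judgment, comparing a new reading against learned behavior, can be sketched in a few lines. The well data, gradients, and thresholds below are invented for illustration.

```python
# A toy anomaly score for a pressure gradient, using only learned statistics.
# All values and names are hypothetical.

import statistics


def anomaly_score(observed_gradient, learned_gradients):
    """How many standard deviations the new reading sits from learned behavior."""
    mu = statistics.mean(learned_gradients)
    sigma = statistics.stdev(learned_gradients)
    return abs(observed_gradient - mu) / sigma


# Gradients (psi/ft) the model has "seen" across analogous wells.
learned = [0.433, 0.441, 0.438, 0.445, 0.436, 0.440, 0.442, 0.437]

score = anomaly_score(0.512, learned)
if score > 3.0:
    print(f"Flag for review: reading sits {score:.1f} sigma from learned behavior")
```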

The computational implications of AI's layered learning are profound. Pretraining is episodic and energy-intensive, often requiring weeks of processing on massive GPU clusters. Post-training is iterative, unfolding over days or weeks as models are fine-tuned for specific fields.

Inference is continual and consumes modest but constant compute resources at every wellsite, in every control room, and during every optimization cycle, operating 24 hours a day, 365 days a year.

Accelerated Computing

For the oil and gas sector, these AI learning stages require new digital infrastructure. The list includes cloud platforms for pretraining, enterprise clusters for post-training, and edge AI chips for real-time inference. These three layers—hyperscale data centers, regional hubs, and intelligent edge devices—are becoming as essential as reservoirs and processing plants.

Powering this infrastructure is accelerated computing. While CPUs handle tasks sequentially, GPUs contain thousands of cores working in parallel, making them ideal for simulating thousands of reservoir realizations simultaneously. This reduces hours-long processes to minutes. With tensor processing units, machine learning operations are executed with exceptional speed.
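The sketch below illustrates that mindset shift with an invented decline-curve ensemble: the sequential loop mirrors CPU-style, one-at-a-time processing, while the single batched expression is the form that maps naturally onto thousands of parallel cores in a GPU framework. The model and parameter ranges are illustrative only.

```python
# Sequential vs. batched evaluation of an ensemble of decline-curve realizations.
# Exponential decline and parameter ranges are invented for illustration.

import numpy as np

rng = np.random.default_rng(seed=0)
n_realizations = 10_000
dt = 5.0                                   # time step, days
t = np.arange(0.0, 3650.0, dt)             # 10-year forecast horizon

qi = rng.uniform(500, 1500, n_realizations)      # initial rate, STB/D
di = rng.uniform(0.0005, 0.002, n_realizations)  # nominal decline, 1/day

# Sequential mindset: evaluate one realization at a time.
cum_loop = np.array([(q * np.exp(-d * t) * dt).sum() for q, d in zip(qi, di)])

# Parallel mindset: evaluate every realization in one batched expression.
rates = qi[:, None] * np.exp(-di[:, None] * t[None, :])   # shape (10_000, 730)
cum_vec = (rates * dt).sum(axis=1)

assert np.allclose(cum_loop, cum_vec)      # same answer, very different hardware fit
```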

Together, these systems enable hybrid workflows where physics-based simulation and AI prediction operate side by side, fundamentally expanding what questions we can ask and what accuracy we can achieve. Yet the most critical element remains human judgment, which holds the capacity to guide, validate, and deploy technology with purpose.

Human-Machine Collaboration

AI has not replaced human talent; it has amplified it while transforming what expertise means. In an era of machine-generated insight, this means human judgment remains our greatest safeguard. For example, fact-checking AI outputs against physics has become a new form of professional ethics.

The next generation of engineers will not merely interpret data but curate intelligence by deciding when to trust, when to override, and when to retrain the machine. This demands the domain intuition to recognize when “something doesn’t look right,” the ethical awareness to balance production with emissions, and the technical depth to distinguish physics-based models from probabilistic AI reasoning.

The industry's most successful teams will cultivate this hybrid literacy, where humans and algorithms collaborate as peers to amplify each other's strengths.

This collaboration is already taking shape. In November 2024, ADNOC launched ENERGYai at the Abu Dhabi International Petroleum Exhibition and Conference (ADIPEC), which it described as the world's first agentic AI system for the energy sector. Trained on 80 years of company data, ENERGYai performs autonomous tasks across the value chain from seismic interpretation to real-time process monitoring. According to ADNOC, this has accelerated geological modeling for CO₂ storage by 75% and cut development planning from years to weeks.

These breakthroughs illustrate agentic AI's power to create systems that coordinate complex workflows, learn continuously from petabytes of data, and integrate with human oversight to remain grounded in physics.

As these systems become more sophisticated, fundamental questions about their role emerge, the most provocative of which is whether they represent merely better software or something fundamentally different. In other words, will agents replace software as we know it?

Traditional software executes predetermined logic. If “event A” occurs, then “event B” follows. AI agents, by contrast, reason about problems, make event-driven decisions, and adapt as conditions change.
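A toy contrast, with invented signals and thresholds, makes the distinction concrete; the "agentic" version here is only a hand-written stand-in for a learned policy.

```python
# Traditional software: a fixed rule; the same input always yields the same action.
def rule_based_choke_advice(pressure_psi):
    if pressure_psi > 3200:            # "if event A, then event B"
        return "reduce choke"
    return "hold"


# Agent-style behavior (sketched): the decision weighs trend and context,
# and shifts as history accumulates. A real agent would use a learned model.
def adaptive_choke_advice(pressure_psi, history):
    history.append(pressure_psi)
    trend = pressure_psi - history[0] if len(history) > 1 else 0.0
    risk = min(1.0, max(0.0, (pressure_psi - 3000) / 500 + trend / 200))
    if risk > 0.7:
        return "reduce choke and schedule a diagnostic"
    if risk > 0.4:
        return "hold, but re-evaluate within 15 minutes"
    return "hold"
```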

As their capabilities grow, the line between “software application” and “autonomous collaborator” may begin to fade, redefining software from a static tool to a dynamic partner. LLMs, the hallmark of the second wave, provide the reasoning foundation that enables this agentic behavior in the third. They will not replace traditional software outright, but increasingly will augment and, in some domains, subsume its functions.

The most likely outcome is a hybrid landscape in which specialized applications coexist with general-purpose agents, each contributing according to its own strengths.

Multi-agent systems now coordinate drilling, production, and facility operations. Key roles include optimizing schedules, predicting failures, and managing the trade-off between efficiency and emissions. These pilots may mark the arrival of “agent engineering,” the science of governing how autonomous systems reason, collaborate, and remain anchored to physics. Just as reservoir engineering once defined the industry's technical core, agent engineering may soon define its digital frontier.

Major operators are also quietly becoming energy-data hyperscalers by managing both hydrocarbon flow and information flow. A single fiber-optic cable installed in a horizontal well can generate over a terabyte of data each day. And with thousands of wells worldwide, the largest operators now process data volumes rivaling those of technology companies. The modern oilfield has become a distributed data center, where energy production and information generation advance in parallel.

However, beneath this promise lies a disquieting reality. AI's capabilities are advancing at a pace that may soon outstrip our ability to comprehend, validate, or control them. The gap between machine prowess and human wisdom grows wider with each breakthrough, and the consequences remain profoundly uncertain.

At the Saddle Point

Every barrel of oil produced today carries a dual ledger: energy delivered and emissions generated. These two opposing gradients, one to maximize and one to minimize, define the industry’s new saddle point. Holding them in balance, I believe, will depend on a precise understanding of physics and AI inference.
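Purely as an illustration, the metaphor can be written as a constrained problem whose Lagrangian has exactly this saddle structure; the symbols (a production plan q, value V, emissions E, an emissions cap and price λ) are mine, not a standard industry formulation.

```latex
% Illustrative only: value to maximize, emissions capped, saddle point of the Lagrangian.
\[
\max_{q}\; V(q) \quad \text{subject to} \quad E(q) \le E_{\max},
\qquad
\mathcal{L}(q,\lambda) = V(q) - \lambda\bigl(E(q) - E_{\max}\bigr),
\qquad
\max_{q}\,\min_{\lambda \ge 0}\; \mathcal{L}(q,\lambda).
\]
```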

The Fifth Industrial Revolution compels us to navigate this balance with wisdom and conscience. Machines can calculate, optimize, and predict, but only humans can decide what should be optimized and why. The responsibility for alignment between machine reasoning and physical truth, between progress and preservation, remains ours.

In this new era, value creation and emission reduction are no longer opposing goals. They are two unknowns in a pair of simultaneous equations, solved through intelligence that will be one part human and one part artificial. If we can hold that balance, grounded in physics, guided by data, and governed by ethics, the industry can move toward a more harmonious equilibrium.

Michael Thambynayagam, SPE, is a retired scientist and research leader whose career spanned more than 35 years in the oil and gas industry. He served as managing director of SLB Gould Research in Cambridge, England, and as general manager of the Abingdon Technology Center in Oxford. He is best known for his work on the mathematics of diffusion, compiled in The Diffusion Handbook: Applied Solutions for Engineers (McGraw-Hill, 2011), and was a recipient of the 2011 PROSE Award for Excellence in Physical Sciences, Mathematics, and Engineering. Thambynayagam holds a PhD in chemical engineering from the University of Manchester and was elected Fellow of the Institution of Chemical Engineers in 1984. He is also an SPE Distinguished Member.