Data & Analytics

Case Study: An Agentic AI Framework for Large-Scale Well Modeling of Offshore Field Developments

Two examples from ONGC show how supervised AI-driven automation scaled well modeling across hundreds of offshore wells, saving more than 1,000 engineering hours.

Oil and gas production platforms offshore India. Source: Oil and Natural Gas Corporation (ONGC).

Well modeling is a foundational activity in production engineering for artificial lift systems. Engineers routinely construct physics-based models to match flowing gradient survey (FGS) data, generate inflow performance relationship (IPR) and vertical lift performance (VLP) curves, analyze pressure-temperature (PT) profiles, and evaluate tubing-size and gas-lift sensitivities.

These workflows directly support decisions related to production optimization, stimulation planning, and liquid-loading diagnostics. In practice, however, such modeling remains highly manual and time intensive.

For a single well, constructing and calibrating a physics-based model in a commercial simulator often requires several hours of focused engineering effort.

When the number of wells increases to hundreds, as is common in large offshore developments, the task becomes prohibitively slow. As a result, many studies are either deferred, restricted to limited subsets of wells, or simplified to meet time constraints. Each of these approaches may compromise decision quality.

To address this scalability challenge, an agentic artificial intelligence (AI) framework was deployed at India’s Oil and Natural Gas Corporation (ONGC). The framework automated large-scale well modeling in a rapid and repeatable manner while keeping engineering teams in a supervisory role (Fig. 1).

Fig. 1—An AI-enabled command-line interface tool in operation for automated well modeling. Source: ONGC.

The framework integrates a domain-specific Python library with an AI-driven command-line agent operating on SLB’s Pipesim engine. The central objective is not to replace engineering judgment, but to remove repetitive manual steps so that engineers can focus on interpretation and decision-making rather than model construction.

Agentic-AI Architecture

The automation framework consists of three tightly integrated components.

Well-Analysis Python Library

A custom Python library, named well-analysis, was developed to encapsulate the most common well-modeling operations in Pipesim. The library provides compact and reusable functions to define tubular geometry and completion configuration, black-oil PVT (pressure-volume-temperature) properties, reservoir inputs and productivity models, artificial-lift settings, IPR generation, VLP correlation selection, PT profiling, nodal analysis, and FGS-based model calibration.

Conventional Pipesim scripting for these tasks often exceeds 100 lines of code per modeling scenario. Using the library, the same operations can typically be expressed in fewer than five lines, which dramatically simplifies automation, maintenance, and reuse.
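To make the "fewer than five lines" claim concrete, the sketch below shows what such a facade's ergonomics could look like. All class and method names here are hypothetical stand-ins; the real well-analysis library is in-house and wraps SLB Pipesim calls that are not reproduced here, so the "calibration" below is placeholder logic, not real simulation.

```python
from dataclasses import dataclass, field

@dataclass
class WellModel:
    """Illustrative stand-in for an in-house well-modeling facade.

    Every name and signature is hypothetical; the production library
    delegates to Pipesim, which this sketch does not do.
    """
    name: str
    config: dict = field(default_factory=dict)

    def set_tubing(self, inner_diameter_in: float, depth_ft: float) -> "WellModel":
        self.config["tubing"] = {"id_in": inner_diameter_in, "depth_ft": depth_ft}
        return self

    def set_black_oil(self, api: float, gor_scf_stb: float, wct_pct: float) -> "WellModel":
        self.config["pvt"] = {"api": api, "gor": gor_scf_stb, "wct": wct_pct}
        return self

    def set_reservoir(self, pres_psia: float, pi_stb_d_psi: float) -> "WellModel":
        self.config["reservoir"] = {"pres": pres_psia, "pi": pi_stb_d_psi}
        return self

    def calibrate_to_fgs(self, survey: list) -> dict:
        # Stand-in "calibration": report survey coverage, not real physics.
        return {"well": self.name, "points_matched": len(survey), "status": "calibrated"}

# The intended ergonomics: a full modeling scenario in a handful of lines.
model = (WellModel("WELL-A1")
         .set_tubing(inner_diameter_in=2.992, depth_ft=8500)
         .set_black_oil(api=34.0, gor_scf_stb=650, wct_pct=20.0)
         .set_reservoir(pres_psia=3200, pi_stb_d_psi=1.8))
result = model.calibrate_to_fgs([(1000, 900), (4000, 1950), (8000, 3050)])
```

The chainable-setter design is one way a wrapper can collapse a 100-line script into a few statements while keeping each input explicit and auditable.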

Small Language Model Agent

A lightweight AI agent powered by a small language model (SLM) orchestrates the modeling workflow. The agent communicates with the simulator through a Model Context Protocol (MCP) server that exposes the functions of the well-analysis library as actionable tools.

Engineers provide high-level natural-language instructions such as building and calibrating models from a data file, generating IPR-VLP and PT plots, or executing tubing sensitivity studies. The agent translates these instructions into structured execution steps while preserving full traceability of inputs, actions, and outputs.
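The pattern of exposing library functions as named tools and executing an agent-emitted plan with full traceability can be sketched as follows. The tool names, plan format, and trace structure are illustrative assumptions, not the actual MCP schema or the framework's internals.

```python
# Minimal sketch: register library functions as agent-callable "tools",
# then execute a structured plan while recording inputs and outputs so
# every step remains traceable.
tools = {}

def tool(fn):
    """Register a function under its name so the agent can invoke it."""
    tools[fn.__name__] = fn
    return fn

@tool
def build_model(well: str) -> str:
    return f"model({well})"        # placeholder for real model construction

@tool
def calibrate_fgs(well: str) -> str:
    return f"calibrated({well})"   # placeholder for real FGS calibration

def run_plan(plan):
    """Execute agent-emitted steps, keeping a full audit trail."""
    trace = []
    for step in plan:
        out = tools[step["tool"]](**step["args"])
        trace.append({"tool": step["tool"], "args": step["args"], "output": out})
    return trace

# A natural-language request like "build and calibrate WELL-A1" would be
# translated by the agent into a structured plan such as:
trace = run_plan([
    {"tool": "build_model", "args": {"well": "WELL-A1"}},
    {"tool": "calibrate_fgs", "args": {"well": "WELL-A1"}},
])
```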

File-Driven Interface and Validation

The agent ingests loosely formatted Excel or CSV files containing well and survey data. Because field data often vary in structure and completeness, the first step is automated schema discovery and validation. Missing or inconsistent inputs are flagged instead of being silently assumed, since well modeling is safety- and decision-critical.
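A minimal sketch of the flag-don't-assume validation step is shown below. The required column names are invented for illustration; the real schema is discovered from the incoming file.

```python
import csv
import io

# Hypothetical required columns; the actual schema is asset-specific.
REQUIRED = {"well", "tubing_id_in", "reservoir_pres_psia", "fgs_depth_ft", "fgs_pres_psia"}

def validate_rows(text: str) -> list:
    """Return a list of issues; never silently fill in missing inputs."""
    rows = list(csv.DictReader(io.StringIO(text)))
    issues = []
    missing_cols = REQUIRED - set(rows[0].keys() if rows else [])
    if missing_cols:
        issues.append(f"missing columns: {sorted(missing_cols)}")
    for line_no, row in enumerate(rows, start=2):  # line 1 is the header
        for col in REQUIRED & set(row):
            if not (row[col] or "").strip():
                issues.append(f"line {line_no}: empty value for '{col}'")
    return issues

sample = (
    "well,tubing_id_in,reservoir_pres_psia,fgs_depth_ft,fgs_pres_psia\n"
    "A1,2.992,3200,8000,3050\n"
    "A2,,3100,7800,\n"   # two empty fields should be flagged, not defaulted
)
issues = validate_rows(sample)
```

An empty issue list is the gate to proceed; anything else is surfaced to the engineer, which matches the safety-critical posture described above.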

Validated data are then used to automatically construct Pipesim physics-based models, perform iterative FGS-based calibration, and generate the requested outputs. Results are written to a structured output directory, including plots, spreadsheets, and execution logs.

During system development, a group of production engineers independently recreated a subset of models manually inside Pipesim and compared them with agent-generated models. The automated outputs were found to be fully consistent with manually constructed models, which established strong operational confidence.

Automated Well-Modeling Workflow

The end-to-end execution workflow of the agentic AI framework is illustrated in Fig. 2. The process begins when the user initiates the AI agent through the command-line interface and provides loosely formatted Excel or CSV data sets containing well and survey information. The agent, driven by an SLM, first compiles and validates the input data structure. If any mandatory information is missing or inconsistent, the agent immediately requests clarification from the user before proceeding.

Fig. 2—The end-to-end execution workflow of the agentic AI well-modeling framework. Source: ONGC.

Once the inputs are verified, the agent generates executable simulation code using the in-house well-analysis Python library. This library abstracts all Pipesim model-building components, including geometry, fluid properties, reservoir definitions, and artificial-lift parameters. The compiled code is then executed by the Pipesim engine in batch mode.

During execution, detailed log files are generated and continuously analyzed by the agent to detect convergence issues, numerical instability, or data inconsistencies. If execution is unsuccessful, the agent automatically reconfigures the workflow or requests additional user input. Upon successful completion, all requested simulation outputs, including IPR-VLP results, PT profiles, and sensitivity analyses, are automatically organized and stored in a dedicated output folder.
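The run-diagnose-retry loop can be sketched as below. The failure signatures, fallback configurations, and toy runner are all assumptions for illustration; the real agent scans actual Pipesim batch logs.

```python
# Sketch of the closed-loop execution step: run a batch job, scan its
# log for known failure signatures, and retry with a fallback before
# escalating to the user.
FAILURE_MARKERS = ("convergence failure", "numerical instability", "missing data")

def diagnose(log_text: str):
    """Return the first known failure signature in the log, or None."""
    lower = log_text.lower()
    for marker in FAILURE_MARKERS:
        if marker in lower:
            return marker
    return None

def run_with_recovery(runner, configs):
    """Try each configuration in order until a run produces a clean log."""
    attempts = []
    for cfg in configs:
        problem = diagnose(runner(cfg))
        attempts.append((cfg, problem))
        if problem is None:
            return {"status": "success", "config": cfg, "attempts": attempts}
    return {"status": "needs_user_input", "attempts": attempts}

# Toy runner: the first VLP correlation "fails", the fallback succeeds.
def fake_runner(cfg):
    return "Convergence failure at node 12" if cfg == "correlation-a" else "Run completed"

outcome = run_with_recovery(fake_runner, ["correlation-a", "correlation-b"])
```

Falling through to "needs_user_input" rather than guessing is what keeps the engineer in the supervisory loop.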

This closed-loop execution design enables fully autonomous, yet engineer-supervised, large-scale well modeling with built-in validation, logging, and recovery mechanisms.

Case Study 1: Large-Scale Model Generation for Stimulation-Planning Support

The first application of the framework involved a large offshore development consisting of approximately 600 wells. The engineering team required updated and calibrated well models to support a stimulation-planning exercise. Under conventional workflows, building and calibrating this number of models manually would have required several months of continuous engineering effort.

The primary challenge was scale rather than conceptual complexity. Engineers had to assemble heterogeneous data sets, construct well models, match FGS data, and generate plots for each well. Even at an optimistic rate of four to five wells per engineer per day, the task would have exceeded 1,000 engineering hours.
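The 1,000-hour figure follows directly from the numbers quoted above, as this back-of-envelope check shows (the 8-hour working day is an assumption):

```python
# Back-of-envelope check of the quoted manual-effort estimate:
# ~600 wells at an optimistic 4-5 wells per engineer per day.
wells = 600
wells_per_day = 4.5      # midpoint of the optimistic 4-5 range
hours_per_day = 8        # assumed working day
engineer_hours = (wells / wells_per_day) * hours_per_day  # ~1,067 hours
```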

Using the agentic AI framework, the engineering team consolidated validated input data and initiated automated model construction and calibration in batch mode. Once validation rules were finalized, the entire population of approximately 600 wells was processed overnight.

From an operational perspective, the most significant outcome was the drastic reduction in engineering time. What would normally have required months of distributed manual effort was completed within a single day of supervised automation.

Engineers redirected their effort toward reviewing exceptions and integrating outputs into stimulation-planning workflows. Conservatively, the total time saved in this case study exceeded 700 engineering hours, without compromising modeling consistency or quality.

Case Study 2: Tubing-Size Optimization and Liquid-Loading Diagnostics

The second case study involved three offshore fields containing a mixture of continuous gas-lift and self-flowing wells. The objective was to prepare calibrated well models and execute large-scale tubing-size sensitivity studies to evaluate liquid-loading behavior across the asset.

A total of 370 simulation scenarios were defined across the three fields. For each scenario, the agent constructed the calibrated base well model, applied the specified tubing configuration, and generated the corresponding PT and nodal analysis results.

All 370 simulations were executed automatically in under 1 hour of wall-clock time. The automated workflow reduced what would otherwise be a large manual modeling effort to only a few hours of supervisory review and result interpretation. The estimated net saving exceeded 320 engineering hours, while enabling comprehensive tubing-sensitivity evaluation across the full well population.

Results and Operational Impact

Across both case studies, the operational benefits of the agentic AI framework were realized primarily through engineering time compression and workflow scalability:

  • Labor impact—Large modeling exercises that previously required weeks or months were completed in hours. Combined savings across the two projects exceeded 1,000 engineering hours.
  • Scalable execution—The same workflow supported tens to hundreds of wells without modification.
  • Repeatability and auditability—Structured outputs and log files ensured complete traceability of results.

From an organizational standpoint, the framework changed how engineers interact with simulation. Modeling shifted from a bottleneck activity to a fast-turnaround analytical tool that can be applied repeatedly as field conditions evolve.

Implementation Lessons

Several practical lessons emerged during deployment:

  1. Automation succeeds only with disciplined data management. Early efforts focused heavily on standardizing units, naming conventions, and FGS metadata.
  2. Engineers must remain in the supervisory loop. The agent flags incomplete data, but final validation remains the responsibility of the production engineer.
  3. Transparency is essential for adoption. Engineers were more willing to trust the system once they could trace every step back to underlying Pipesim operations.
  4. The greatest benefits arise from high-volume, repetitive workflows such as bulk calibration and sensitivity analysis.

Conclusions

This study demonstrates that large-scale well modeling can be converted from a labor-intensive bottleneck into a rapid, scalable, and engineer-supervised workflow using an agentic AI framework. By integrating a domain-specific Python library with an AI-driven command-line agent operating on Pipesim, routine model construction, FGS-based calibration, and standard analyses are automated without loss of physics fidelity.

Across two offshore case studies involving more than 600 wells and 370 tubing-sensitivity scenarios, modeling turnaround time was reduced from months to hours, with cumulative savings exceeding 1,000 engineering hours. The framework enables engineers to focus on interpretation and decision-making rather than repetitive model building, providing a practical pathway to scale well-modeling workflows in large producing assets.

Aman Sharma, SPE, is an AI and cloud computing specialist with over 7 years of experience in machine learning, large language models, reinforcement learning, and cloud-native systems. He is currently an executive engineer in the research and development division of ONGC, Mumbai, India. His work focuses on developing production-grade AI solutions for industrial applications, including agentic AI solutions, retrieval-augmented generation knowledge systems, vision-language models, and optimization frameworks for oil and gas operations. His research interests include industrial AI, optimization algorithms, machine learning operations, and data-driven energy systems. He is a Google Cloud Platform and Amazon Web Services certified solutions architect and machine learning specialist, and a recipient of seven gold medals for engineering excellence from the National Institute of Technology, Prayagraj, India.