Digital Transformation

Analytics Moves to Midstream: AI Opportunities Away From the Wellhead

The AI revolution in the consumer goods market, led by companies like Amazon and Alibaba, brought significant changes to market dynamics. Similar impacts can be expected in the midstream industry as the AI revolution unfolds there.

Image source: GE

Artificial intelligence (AI) and big data analytics have the reputation of being very recent phenomena, and the oil and gas industry has a reputation for being a laggard when it comes to adopting new technology. However, analysis of large datasets and AI-driven insights have already enjoyed relatively broad and early adoption in exploration and production. In fact, exploration and production may have been one of the first large-scale industrial uses of big data, yielding dramatic improvements in the efficiency of production. For example, the “dry hole” rate (the rate at which a new well fails to achieve commercially viable production) has fallen from about 40% in the 1960s to around 10% today. This improvement in efficiency, which has significantly reduced exploration costs, is largely the result of improved analytics on seismic, satellite, and other data sources collected from the basin. Similarly, the productivity of wells has increased as new analytical tools guide improving completion technologies, including hydraulic fracturing, horizontal drilling, and artificial lift.

In contrast, AI and big data analytics have not enjoyed nearly the same adoption in the midstream sector, even though there are several significant opportunities. In many ways, the midstream sector looks like other distribution and logistics markets: many producers making a product, many buyers (refineries) looking to consume it, and middlemen (midstream companies) controlling the flow of goods and information. With AI, Amazon was able to disrupt the supply chain for consumer goods by making it much easier for manufacturers of a particular product to connect directly with the buyers most interested in that niche. As its size and market power grew, Amazon was also able to use AI to make significant improvements in the efficiency of the transportation and storage networks that support this commerce.

Many of these problems, well suited for data analytics and AI, have very close analogs in the midstream sector of the oil and gas business. However, the midstream supply chain has one property that is distinct from consumer packaged goods: every time shipments are consolidated, their composition is irreversibly altered because the liquid commodities physically mix. As a result, the quality of a barrel of oil, and therefore its value, is continuously changing as it moves through the supply chain.
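
As a rough illustration (all numbers below are made up), commingling two batches yields a new, weighted-average quality that cannot be separated back out downstream:

    # Toy example with hypothetical batches: once the liquids mix, the blend
    # takes on a new quality that reflects both parents and cannot be un-mixed.
    batches = [
        {"volume_bbl": 10_000, "density_kg_m3": 820.0, "sulfur_wt_pct": 0.4},
        {"volume_bbl": 5_000, "density_kg_m3": 930.0, "sulfur_wt_pct": 2.1},
    ]
    total_volume = sum(b["volume_bbl"] for b in batches)
    total_mass = sum(b["volume_bbl"] * b["density_kg_m3"] for b in batches)
    # Density blends (approximately) by volume; sulfur wt% blends by mass.
    blend_density = total_mass / total_volume
    blend_sulfur = sum(
        b["volume_bbl"] * b["density_kg_m3"] * b["sulfur_wt_pct"] for b in batches
    ) / total_mass
    print(f"Blend: {blend_density:.1f} kg/m3, {blend_sulfur:.2f} wt% sulfur")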

Poor transparency around quality, and how it changes through the midstream supply chain, is a big problem for the industry. Refineries frequently encounter shipments whose quality differs significantly from what they expected. Midstream companies and producers can lose significant value in their product through poorly managed quality. Improving transparency on product quality in the oil supply chain, both within a company’s operations and between counterparties in the market, is a significant opportunity for AI to make an impact in the midstream sector.

Perhaps surprisingly, improving quality management may not require deploying many new types of analytical devices in the field. Many devices that generate real-time data on product quality are already deployed today. The main barriers to better quality management are data isolation and poor data management: it is common for data generated by local field labs and online analyzers to remain siloed at the field site (e.g., on a local SCADA system) and not easily accessible to most people in an organization. Connecting these devices to a centralized cloud-based data repository makes the results accessible to everyone and makes it possible to cross-correlate results between different test methods.
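
As a minimal sketch of what that centralization can look like, here is a hypothetical record format and ingestion call (the schema, field names, and endpoint are assumptions for illustration, not a description of any particular system):

    # Hypothetical schema: normalize results from field labs and online analyzers
    # into one record format so they can be centralized and cross-correlated.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json
    import urllib.request

    @dataclass
    class QualityMeasurement:
        site: str        # field lab or terminal identifier
        stream: str      # tank, pipeline batch, or truck load the sample came from
        method: str      # test method, e.g., "ASTM D4052" for density
        analyte: str     # quality parameter measured
        value: float
        units: str
        timestamp: str   # ISO 8601, UTC

    def push_to_central_store(m: QualityMeasurement, url: str) -> None:
        """POST one measurement to a central repository (the URL is an assumption)."""
        req = urllib.request.Request(
            url,
            data=json.dumps(asdict(m)).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    m = QualityMeasurement("Field-Lab-07", "Tank-12", "ASTM D4052", "density",
                           823.4, "kg/m3", datetime.now(timezone.utc).isoformat())
    # push_to_central_store(m, "https://central-quality-store.example/api/measurements")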

Only when these data are centralized does it become possible to run any kind of analytics on them. This is why we connect all of our customers’ testing infrastructure to the cloud as part of our quality management offering at my company. Once that is accomplished, the analytics focus on quantifying uncertainty and on better testing:

  1. Quantifying uncertainty in quality: When blending product, it is just as important to know the uncertainty in a measurement as it is to know the result. With a poor understanding of uncertainty, companies either risk blending outside of the specification limits, leading to shipments being rejected or deeply discounted, or blend too conservatively below these limits, leading to “quality giveaway.”

  2. Reducing uncertainty in product quality through better testing: Facility-wide analytics on quality testing data often identify ways to adjust testing methodology and testing schedules that significantly reduce uncertainty without increasing the cost of operations. For example, a counterintuitive finding we have encountered frequently is that the time variability of a given quality parameter (e.g., density) is much larger than the expected error of even the crudest test method. Thus, clients often benefit from spot-sampling programs that run the cheapest, least accurate test method much more frequently, rather than a more expensive method less often, reducing the overall uncertainty in the quality numbers (as sketched below).
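
A back-of-envelope sketch of that trade-off, with purely illustrative numbers: the uncertainty of a period-average quality estimate scales roughly as sqrt((sigma_process^2 + sigma_method^2) / n), so when process variability dominates, the number of samples matters far more than instrument precision.

    # Illustrative comparison: many cheap, noisy tests vs. a few precise ones
    # when estimating the average quality over a settlement period.
    import math

    def avg_uncertainty(sigma_process: float, sigma_method: float, n_samples: int) -> float:
        """Standard error of the period-average quality under simple i.i.d. assumptions."""
        return math.sqrt((sigma_process ** 2 + sigma_method ** 2) / n_samples)

    sigma_process = 3.0  # hypothetical day-to-day swing in density, kg/m3
    cheap = avg_uncertainty(sigma_process, sigma_method=1.0, n_samples=30)   # daily spot test
    precise = avg_uncertainty(sigma_process, sigma_method=0.1, n_samples=2)  # twice-a-month lab test
    print(f"30 cheap tests:   +/- {cheap:.2f} kg/m3")    # ~0.58
    print(f"2 precise tests:  +/- {precise:.2f} kg/m3")  # ~2.12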

Quantifying and then reducing the quality uncertainty of incoming and outgoing streams allows the risk and opportunity in a blend plan to be optimally balanced, and it has led to improvements in output quality worth several dollars per barrel.
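
As a minimal sketch with hypothetical numbers: for a quality parameter with a maximum-limit spec such as sulfur, the blend target has to sit below the limit by roughly z * sigma to keep the rejection risk acceptable, so halving the uncertainty sigma halves the giveaway margin.

    # Sketch: safety margin below a max-sulfur spec needed for a given confidence,
    # and how it shrinks when quality uncertainty is reduced.
    from statistics import NormalDist

    def blend_target(spec_limit: float, sigma: float, confidence: float = 0.95) -> float:
        z = NormalDist().inv_cdf(confidence)  # one-sided confidence factor (~1.645 at 95%)
        return spec_limit - z * sigma

    spec_sulfur = 0.50  # wt%, hypothetical contract limit
    for sigma in (0.06, 0.03):
        target = blend_target(spec_sulfur, sigma)
        print(f"sigma = {sigma:.2f} wt% -> blend to {target:.3f} wt% "
              f"(giveaway margin {spec_sulfur - target:.3f} wt%)")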

Marketwide Impacts of Data-Driven Quality Management

The AI revolution in the market for consumer goods, led by companies like Amazon and Alibaba, brought significant changes to the overall dynamics of that market. I expect to see similar impacts from an AI revolution in the midstream sector of the oil and gas market. Here are a couple of examples:

  1. I expect to see more direct interaction between refineries and producers, with refineries becoming more active in the management of product quality throughout the entire transportation network. You can already see the beginning of some of these changes. For example, some refineries in the US have started trucking barrels directly from producer sites, avoiding pipeline networks entirely, to gain more certainty around product quality.

  2. Transactions involving the physical transfer of products will speed up significantly. Right now, it is quite common for crude shippers to be paid for a shipment up to one month after the product was delivered. Much of this delay is a result of the time it takes to get lab test results back. As more sophisticated models are developed that can predict product quality within a narrow range from real-time measurements in the field, buyers will be able to gain an advantage in the market by paying sellers immediately upon delivery, with perhaps only a small amount withheld until the lab results are in to account for residual uncertainty (a rough sketch of this idea follows). This development would instantly save producers several dollars per barrel in interest payments because it would eliminate the need for them to carry debt on their balance sheets while they wait for payment.
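
A minimal sketch of that idea, using synthetic numbers and a deliberately simple linear fit (the readings, values, and workflow below are illustrative assumptions, not any buyer's actual process): a model maps a real-time field reading to the expected lab result, and the residual scatter sizes the small holdback.

    # Synthetic example: predict the eventual lab density from an inline analyzer
    # reading, and use the residual scatter to bound the remaining uncertainty.
    import statistics

    field = [820.1, 835.4, 812.9, 828.7, 841.2, 818.3]  # inline analyzer, kg/m3
    lab = [821.0, 836.8, 813.5, 829.9, 842.7, 819.0]    # confirming lab result, kg/m3

    mean_x, mean_y = statistics.fmean(field), statistics.fmean(lab)
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(field, lab))
        / sum((x - mean_x) ** 2 for x in field)
    )
    intercept = mean_y - slope * mean_x
    residual_sd = statistics.pstdev(
        [y - (intercept + slope * x) for x, y in zip(field, lab)]
    )

    new_reading = 825.0
    predicted = intercept + slope * new_reading
    print(f"Predicted lab density: {predicted:.1f} +/- {2 * residual_sd:.1f} kg/m3")
    # The width of this band could set how much of the payment is withheld
    # until the lab result confirms the quality.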

The Impacts of the Data Revolution on the Careers of YPs

Young professionals (YPs) are bombarded these days with opinions about how their careers will be upended by technology in the next few decades, and oil and gas is no exception. However, I think the idea that most jobs will disappear is overblown. The main change I see is that basic literacy in computer science and statistics is becoming a required skill for everyone rather than an elective specialty that only a few professionals need.

Reading provides a good analogy. When deciding whether to learn to read and write, no one today asks first whether they aspire to become a writer. Reading and writing have simply become basic tasks that everyone has woven into their day-to-day jobs. However, this was not true before the advent of the printing press. At that time, most jobs (e.g., farming) did not require literacy, and learning to read was considered highly specialized training undertaken for only a select number of occupations. The same shift is happening with statistical and computer skills today. For the new generation, becoming fluent in these areas should be thought of as a necessity, not a choice.

However, I doubt that this need for statistical fluency is going to restrict the diversity of careers. Not everyone will become a data scientist. There will still be engineers and marketers, salespeople and human resources professionals. But all of these professionals will need to be comfortable with data and code, just as they all know how to read and write today.



Ian Burgess is the chief technology officer at Validere, a company that builds quality intelligence using artificial intelligence and the Internet of Things for oil and gas blending, logistics, and trading. Burgess is an interdisciplinary scientist whose inventions have been recognized with an R&D 100 Award and whose articles have been featured in several science publications. He holds a PhD in applied physics from Harvard University and a bachelor’s degree in math and physics from the University of Waterloo.