Dynamometer-Card Classification Uses Machine Learning
In reciprocating rod pumping systems, analysis of dynamometer-card data can deliver valuable insight into the status of the pump, and can indicate if future action is required. The complete paper explains the steps taken to improve surveillance of beam pumps using dynamometer-card data and machine-learning techniques, and reviews lessons learned from executing the operator’s first artificial intelligence (AI) project.
The oldest and most widely used method of artificial lift is beam pumping, also known as the sucker-rod lift method. A dynamometer is a device used on beam pumps that measures load on the polished rod (at the top of the rod string) and plots that load against rod position as the pumping unit moves through a stroke cycle. This plot is known as the surface card.
A pump card is a plot of load vs. position at the pump's plunger. The pump card is more useful for surveillance because it filters out the effects of everything above the plunger and yields standard card shapes for interpreting pump operating conditions. Identifying and diagnosing beam-pump conditions from the pump card is an expensive visual-interpretation process that requires not only significant labor time but also deep domain expertise.
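Standard pump-card shapes (full pump, gas interference, fluid pound, and so on) are distinguished largely by the geometry of the load-position loop. As a simple illustration of the kind of geometric cue a human interpreter weighs, the sketch below computes the area enclosed by a card using the shoelace formula; the card data here are synthetic, and the paper's classifiers work on the raw card values rather than hand-crafted features like this.

```python
import numpy as np

def card_area(position, load):
    """Area enclosed by a closed load-position loop (shoelace formula).

    A full-pump card encloses a large, roughly rectangular area;
    conditions such as fluid pound shrink and distort that area.
    """
    x = np.asarray(position, dtype=float)
    y = np.asarray(load, dtype=float)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Synthetic rectangular "ideal" card: unit stroke, unit load range.
pos = np.array([0.0, 1.0, 1.0, 0.0])
lod = np.array([0.0, 0.0, 1.0, 1.0])
print(card_area(pos, lod))  # 1.0 for the unit rectangle
```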
Use of machine-learning techniques for pattern recognition can help automate the visual interpretation process, increasing efficiency and reducing maintenance activities resulting from missed early diagnosis.
Sucker-rod pumps are widely deployed in the Bahrain Field, where two types of communication layers are used: radio and optical fiber. Almost 300 optical-fiber-connected beam-pump units (BPUs) were selected for surveillance and advanced analysis; fiber was chosen to avoid communication issues during data collection for the AI project. Each pump is sampled once every 20 seconds, or three pump cards per minute. All BPUs are connected to a central server that converts the hardware communication protocol used by the programmable logic controllers into the open-platform-communication (OPC) protocol.
Fig. 1 shows the details of the 209 values representing a single pump card being read from a BPU and stored in the operator database. During the data-collection period, a total of 5,380,163 pump cards from 297 BPUs were collected and stored in the database.
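The layout of those 209 values is given in the paper's Fig. 1 (not reproduced here); for downstream processing, each card can simply be treated as a fixed-length numeric vector. A minimal sketch, using hypothetical records, of stacking such vectors into an array while discarding malformed reads:

```python
import numpy as np

CARD_LENGTH = 209  # values per pump card, per the paper

def stack_cards(records):
    """Stack raw card records into an (n_cards, 209) float array,
    dropping any record that does not have exactly 209 values."""
    valid = [r for r in records if len(r) == CARD_LENGTH]
    return np.array(valid, dtype=float)

# Hypothetical records: two well-formed cards and one truncated read.
records = [list(range(CARD_LENGTH)),
           [0.0] * CARD_LENGTH,
           [1.0, 2.0, 3.0]]  # truncated -> dropped
cards = stack_cards(records)
print(cards.shape)  # (2, 209)
```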
Data Labeling and Preparation
In machine learning, an important step after data collection is data labeling, which is usually performed manually by experts. The labeled data set is used to feed machine-learning algorithms that detect patterns specific to each class.
A total of 35,292 pump cards have been labeled (more than 1,000 cards per day on average). A useful feature of the labeling software is that it allows an entire time period of cards to be labeled in one operation, although such bulk labeling should be performed with care.
Feature engineering is the process of using domain knowledge to create features that make machine-learning algorithms work. In the AI project, all 209 values collected for each pump card in the labeled data set are used as features, with each feature defined as a column in a structured format. The target (or label) is the classification of the card type that these 209 features represent and is used in machine-learning classification later. No well names or other well-related information are used as features. Finally, all duplicate records are removed; the resulting data set after cleanup contains 22,298 cards. The paper discusses the training data set and the role it plays in evaluating each model's performance.
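A minimal sketch of that preparation step with pandas, assuming each card's values occupy columns v0, v1, … and the expert label occupies a label column (the actual column names and storage format are not given in the paper):

```python
import numpy as np
import pandas as pd

def build_training_frame(cards, labels):
    """Assemble a structured training set: one column per card value,
    one target column, with exact duplicates removed."""
    feature_cols = [f"v{i}" for i in range(cards.shape[1])]
    df = pd.DataFrame(cards, columns=feature_cols)
    df["label"] = labels
    # Duplicate cards carry no extra information and can bias evaluation.
    return df.drop_duplicates().reset_index(drop=True)

# Toy example: three cards of five values each, one exact duplicate.
cards = np.array([[1, 2, 3, 4, 5],
                  [1, 2, 3, 4, 5],   # duplicate of the first card
                  [9, 8, 7, 6, 5]], dtype=float)
labels = ["full_pump", "full_pump", "fluid_pound"]
df = build_training_frame(cards, labels)
print(len(df))  # 2 rows remain after duplicate removal
```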
Eight machine-learning models are trained on the data set and evaluated on the basis of cross-validation accuracy and holdout accuracy.
These models, ranked from best-performing to least-performing, are the following:
- Gradient-Boosted Machines (GBM) (Early Stopping). Gradient boosting is a machine-learning technique based on an ensemble of weak prediction models, in this case decision trees.
- eXtreme Gradient-Boosted Trees (XGBoost). XGBoost is an optimized distributed gradient-boosting library designed to be efficient, flexible, and portable. This model is similar to GBM fundamentally because both are gradient-boosting implementations.
- Regularized Logistic Regression. Logistic regression is a widely used statistical model that uses a logistic function to model a binary dependent variable.
- Random Forest Classifier. Random forests are ensemble-learning algorithms for classification that work by building a multitude of decision trees during training and outputting the class that is the mode of the classes of the individual trees.
- Light Gradient-Boosted Trees (64 Leaves). This model is a fast, distributed high-performance gradient-boosting framework based on decision-tree algorithms used for classification.
- ExtraTrees Classifier. ExtraTrees is a variant of the Random Forest Classifier that introduces additional randomness.
- Light Gradient-Boosted Trees (16 Leaves). This model is similar to the algorithm in Model 5, but with the number of leaves set to 16.
- TensorFlow Deep-Learning Classifier. This model is a multilayer feed-forward neural network. Artificial neural networks, or connectionist systems, are computing systems inspired by the biological neural networks that constitute animal brains.
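A hedged sketch of that evaluation loop with scikit-learn, using synthetic data and a subset of the listed model families (the paper's actual hyperparameters, data splits, and XGBoost/LightGBM/TensorFlow configurations are not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for the card data: 209 features per sample.
X, y = make_classification(n_samples=400, n_features=209, n_informative=20,
                           n_classes=3, random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    # n_iter_no_change enables early stopping on an internal validation split.
    "GBM (early stopping)": GradientBoostingClassifier(
        n_estimators=200, n_iter_no_change=5, random_state=0),
    "Regularized logistic regression": LogisticRegression(
        C=1.0, max_iter=2000),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "ExtraTrees": ExtraTreesClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    cv_acc = cross_val_score(model, X_train, y_train, cv=3).mean()
    holdout_acc = model.fit(X_train, y_train).score(X_hold, y_hold)
    print(f"{name}: cv={cv_acc:.3f}, holdout={holdout_acc:.3f}")
```

Reporting both cross-validation and holdout accuracy, as the paper does, guards against a model that merely fits the quirks of one particular split.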
Alternative Deep-Learning Approaches
While high accuracy can be achieved with traditional machine-learning techniques, computer vision has advanced dramatically in recent years. Specifically, convolutional neural networks (CNNs) perform well in computer-vision classification problems similar to those seen in pump-card classification. CNNs, a class of deep, feed-forward artificial neural networks, use a variation of multilayer perceptrons designed to require minimal preprocessing. The authors expect CNNs to perform well on the data set; however, they do not cover CNN approaches for classification in the paper.
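As intuition for why CNNs suit this problem: a convolutional layer slides small learned filters along the card, responding to local shape features such as the abrupt load drop of a fluid-pound card. Below is a hand-crafted analogue of one such filter applied to a synthetic signal with numpy; an actual CNN would learn many such filters from the data rather than having them specified.

```python
import numpy as np

# Synthetic load trace: flat, then an abrupt drop (a crude stand-in for
# the sharp load loss seen on a fluid-pound card).
load = np.concatenate([np.ones(50), np.zeros(50)])

# A simple edge-detecting filter; a CNN learns banks of filters like this.
edge_filter = np.array([1.0, 0.0, -1.0])
response = np.convolve(load, edge_filter, mode="valid")

# The filter response peaks where the load changes sharply.
print(int(np.argmax(np.abs(response))))
```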
The work provides insights for an organization with regard to best practices in executing an AI or machine-learning project. A critical contributing factor to the success achieved is the operator's significant financial investment in building a digital oil field with a robust automation and communication infrastructure.
The standardization of tag naming also is critical to collecting data from supervisory control and data acquisition (SCADA) systems. Such a process allows a quick, standardized approach to reading all pump cards of a specific provider from an OPC server using an OPC client. The paper emphasizes the need for exploration and production companies to invest in data acquisition and gathering, given the exponential advancements in AI in recent years.
The authors suggest the following characteristics for initiating AI projects:
- Projects should be technically feasible and implementable either internally or in collaboration with external resources.
- Projects need well-defined objectives. In the case of the study presented by the authors, the clear objective was to create business value and improve surveillance of more than 800 BPUs with limited resources.
This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 194949, “Beam-Pump Dynamometer Card Classification Using Machine Learning,” by Sayed Ali Sharaf, Tatweer Petroleum; Patrick Bangert, SPE, Algorithmica Technologies; and Mohamed Fardan, Tatweer Petroleum, et al., prepared for the 2019 SPE Middle East Oil and Gas Show and Conference, Manama, Bahrain, 18–21 March. The paper has not been peer reviewed.