Reservoir simulation

Production-Optimization Strategy Using a Hybrid Genetic Algorithm


The optimization algorithm used in this work is a hybrid genetic algorithm (HGA), which combines GAs with artificial neural networks (ANNs) and evolution strategies (ESs). The HGA attempts to handle the complex and diverse parameters governing the production-optimization problem. It is coupled with a commercial simulator and has been applied to real fields to quantify its benefits over a base case that uses a conventional GA.

Background

GAs. GAs are part of a larger group of methods in artificial intelligence (AI) called evolutionary computation. These methods are inspired by natural evolution in biology. The GA has been well-recognized as an optimization method that has the ability to work in a solution space with nonsmooth and nonlinear topology, where traditional methods generally fail. Several entities that make up the building blocks of GAs have their direct counterpart in nature. Populations, individuals and their fitness, generations, and genomes are all present both in nature and in GAs. A detailed discussion of the GA method is provided in the complete paper.
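As a rough illustration of these building blocks (not taken from the paper), the following minimal Python sketch shows a generic GA loop; the bit-string length, population size, rates, and toy bit-counting objective are placeholder assumptions.

```python
import random

def evaluate(bits):
    # Toy fitness: count of 1 bits (stands in for the simulator-based objective).
    return sum(bits)

def run_ga(genome_len=16, pop_size=20, generations=50, cx_rate=0.6, mut_rate=0.05):
    # Population of binary genomes, as in the paper's encoding.
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [evaluate(ind) for ind in pop]
        ranked = [ind for _, ind in sorted(zip(fitness, pop), key=lambda p: p[0], reverse=True)]
        next_pop = ranked[:2]  # keep the two best individuals (simple elitism)
        while len(next_pop) < pop_size:
            a, b = random.sample(ranked[:pop_size // 2], 2)   # mate among the better half
            cut = random.randrange(1, genome_len)             # single-point crossover
            child = a[:cut] + b[cut:] if random.random() < cx_rate else a[:]
            child = [bit ^ 1 if random.random() < mut_rate else bit for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=evaluate)

print(run_ga())
```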

ANNs. ANNs are a method in AI inspired by the structure and function of the brain. The method aims to interpret the functions, processes, simulators, or similar artifacts that produce input/output patterns by learning from given training points. The learning process for the ANN in this study uses a back-propagation (BP) algorithm. After an ANN has learned from the training data, the system will receive input, process it, and provide the output. Many variants and types of ANNs exist. Neural networks are composed of nodes, or units, connected by directed links; some nodes receive inputs, some provide outputs, and hidden nodes lie in between.

Learning in an ANN is typically accomplished through examples. This is also called "training" because learning is achieved by adjusting the weights iteratively so that the trained ANN performs well in interpreting the function at testing points. The most common method for training an ANN is the BP method.
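The following is a minimal sketch of BP training for a single-hidden-layer network; the toy sine target, layer width, and learning rate are illustrative assumptions and are not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training points: learn y = sin(x) on [0, pi] with one hidden layer.
X = np.linspace(0, np.pi, 40).reshape(-1, 1)
Y = np.sin(X)

W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
lr = 0.05

for epoch in range(5000):
    H = np.tanh(X @ W1 + b1)           # forward pass
    pred = H @ W2 + b2
    err = pred - Y                     # output error
    # Back-propagate the error and adjust the weights iteratively.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)     # derivative of tanh
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)
    W2 -= lr * dW2
    b2 -= lr * db2
    W1 -= lr * dW1
    b1 -= lr * db1

print("final MSE:", float((err**2).mean()))
```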

ESs. ESs are a major class of evolutionary algorithms (EAs), a family that also includes evolutionary programming and GAs. ESs are similar to GAs as optimizers; the main difference is that ESs rely only on mutation as the operator. This distinction makes ESs more powerful than GAs in dealing with large, complex problems that contain many local optima. ESs can also handle problems for which no explicit or exact objective function is available.

ESs start with a population of candidate trial solutions. At each iteration (generation), ESs update each individual in the population by use of control parameters called strategy parameters. This updating process is called the ES mutation operation.
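A minimal ES sketch of this mutation operation is shown below; the self-adapted per-variable step sizes play the role of the strategy parameters, while the multimodal toy objective and all numerical settings are assumptions for illustration only.

```python
import math
import random

def objective(x):
    # Toy multimodal objective (stands in for the simulator response); lower is better.
    return sum(xi**2 + 10 * (1 - math.cos(xi)) for xi in x)

def es_minimize(dim=5, mu=10, generations=200):
    tau = 1.0 / math.sqrt(2 * dim)
    # Each individual carries its variables x and its strategy parameters (step sizes).
    pop = [([random.uniform(-5, 5) for _ in range(dim)], [1.0] * dim) for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for x, sigma in pop:
            # ES mutation: perturb the strategy parameters first, then the variables.
            new_sigma = [s * math.exp(tau * random.gauss(0, 1)) for s in sigma]
            new_x = [xi + s * random.gauss(0, 1) for xi, s in zip(x, new_sigma)]
            offspring.append((new_x, new_sigma))
        # (mu + mu) selection: keep the best mu individuals of parents and offspring.
        pop = sorted(pop + offspring, key=lambda ind: objective(ind[0]))[:mu]
    return pop[0][0]

print(objective(es_minimize()))
```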

Evolution in Artificial Neural Networks (EANNs)

In this study, the authors attempted to generalize the HGA so that it could be used to optimize any kind of reservoir model automatically. One route toward this goal is to explore the benefits that arise from combining two of the augmented algorithms, the ANN and the EA.

Evolution has been introduced into the ANN at roughly three different levels: connection weights, architectures, and learning rules. The evolution of connection weights introduces an adaptive and global approach to training, especially in the reinforcement-learning and recurrent-network-learning paradigms, where gradient-based training algorithms often experience difficulties. The evolution of architectures enables ANNs to adapt their topologies to different tasks without human intervention and thus provides an approach to automatic ANN design because both ANN connection weights and structures can be evolved.
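As a rough illustration of the first level, evolving connection weights without gradients, the following sketch mutates the flattened weight vector of a tiny fixed-topology network; the network size, mutation scale, and toy data are assumptions, not details from the paper.

```python
import math
import random

def forward(weights, x):
    # Tiny 1-4-1 network; "weights" is a flat list: 4 input weights, 4 hidden biases,
    # 4 output weights, and 1 output bias (13 values in total).
    hidden = [math.tanh(weights[i] * x + weights[4 + i]) for i in range(4)]
    return sum(weights[8 + i] * h for i, h in enumerate(hidden)) + weights[12]

def fitness(weights, points):
    # Negative squared error, so that higher fitness is better.
    return -sum((forward(weights, x) - y) ** 2 for x, y in points)

points = [(x / 10, math.sin(x / 10)) for x in range(0, 31)]
pop = [[random.gauss(0, 1) for _ in range(13)] for _ in range(30)]

for _ in range(300):
    pop.sort(key=lambda w: fitness(w, points), reverse=True)
    parents = pop[:10]
    # Evolve connection weights by Gaussian mutation of the best individuals.
    pop = parents + [[w + random.gauss(0, 0.1) for w in random.choice(parents)] for _ in range(20)]

pop.sort(key=lambda w: fitness(w, points), reverse=True)
print("best error:", -fitness(pop[0], points))
```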

Methodology

The optimization process proposed in this study is based on a GA coupled with an ANN and ESs. The ANN and ESs are used mainly to reduce the number of simulations so that the process becomes time-efficient. The architecture of an ANN is known to determine its information-processing capability, which makes the algorithm problem-dependent. The algorithm searches for near-optimal sets of production variables to optimize the objective function, which is net present value (NPV). The process work flow is shown in Fig. 1.

Fig. 1: HGA flow chart.

 

HGA Structure/Work Flow. Initial Population Generation. In this first step, the input variables are encoded into binary strings to form a genotype. The initial population is then generated, either randomly or by use of an algorithm that ensures an appropriate distribution of the population. Because choosing the initial population randomly often makes the algorithm inefficient, the initial population in this work is generated with the Latin-hypercube-design (LHD) method to ensure that it is properly distributed.
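A minimal sketch of LHD sampling and binary encoding is given below; the simple one-sample-per-stratum LHD, the 8-bit genes, and the example variable bounds (borrowed from the application section purely for illustration) are assumptions rather than the paper's exact implementation.

```python
import random

def latin_hypercube(n_samples, n_vars):
    # One stratified sample per interval in each dimension, shuffled independently (LHD).
    samples = []
    for _ in range(n_vars):
        column = [(i + random.random()) / n_samples for i in range(n_samples)]
        random.shuffle(column)
        samples.append(column)
    return list(zip(*samples))  # n_samples points in [0, 1)^n_vars

def encode(value, low, high, n_bits=8):
    # Map a continuous value onto an n_bits binary string (part of the genotype).
    level = round((value - low) / (high - low) * (2**n_bits - 1))
    return format(level, f"0{n_bits}b")

# Hypothetical example: 10 individuals, 2 variables (an injection rate and a conversion year).
for point in latin_hypercube(10, 2):
    rate = 100 + point[0] * (1500 - 100)   # BWPD bounds, for illustration
    year = 6 + point[1] * (19 - 6)
    print(encode(rate, 100, 1500), encode(year, 6, 19))
```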

Evaluation. After the initial population has been generated, all of the individuals are evaluated with a commercial simulator. After a simulation is completed, the value of the objective function is returned to the program so that the algorithm can continue. The evaluation result for each individual is called its fitness value. The results are then ranked from best to worst.

Database. During the optimization process, all of the evaluation results obtained with the simulator are stored in a database. This prevents the same inputs from being evaluated with the simulator more than once.
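A minimal sketch of such a database is shown below, assuming a simple in-memory cache keyed on the input vector; the class and method names are hypothetical.

```python
import json

class SimulationDatabase:
    """Caches simulator results so the same input vector is never evaluated twice."""
    def __init__(self):
        self._results = {}

    def evaluate(self, individual, run_simulator):
        key = json.dumps(individual, sort_keys=True)   # hashable key for the input vector
        if key not in self._results:
            self._results[key] = run_simulator(individual)  # expensive call happens only once
        return self._results[key]

    def training_points(self):
        # All stored (input, objective) pairs, later reused to train the EANN proxy.
        return [(json.loads(k), v) for k, v in self._results.items()]

db = SimulationDatabase()
print(db.evaluate({"rate_bwpd": 900}, lambda ind: 42.0))  # placeholder simulator call
print(db.evaluate({"rate_bwpd": 900}, lambda ind: 42.0))  # served from the cache
```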

Training the EANN. Input points from the database are used as training points and validation points. To prevent overfitting (overtraining), a cross-validation algorithm is used.
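The paper does not detail its cross-validation settings; the sketch below assumes a simple hold-out split with early stopping, in which training stops once the validation error stops improving. The 80/20 split, patience value, and callback functions are illustrative assumptions.

```python
import random

def train_with_validation(points, fit_one_epoch, validation_error, max_epochs=500, patience=20):
    """Hold out part of the database as validation points; stop when their error stops improving."""
    random.shuffle(points)
    split = int(0.8 * len(points))
    train, valid = points[:split], points[split:]
    best_err, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        fit_one_epoch(train)                  # one BP pass over the training points
        err = validation_error(valid)         # error on points the network has not seen
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:  # no improvement: assume overfitting has begun
            break
    return best_err
```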

Reproduction. The reproduction module of the program uses two mechanisms, crossover and mutation. Single-point crossover was selected for its simplicity and because there is little evidence that multiple-point crossover improves the algorithm. Mating pairs are chosen randomly on the basis of the crossover rate, and mutation is applied in conjunction with the crossover process.
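A minimal sketch of single-point crossover followed by bit-flip mutation is shown below; the 0.6 and 0.3 rates are the values reported later in the article, and the 8-bit parents are placeholders.

```python
import random

def single_point_crossover(parent_a, parent_b, cx_rate=0.6):
    # With probability cx_rate, swap tails at one random cut point; otherwise copy the parents.
    if random.random() < cx_rate:
        cut = random.randrange(1, len(parent_a))
        return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]
    return parent_a[:], parent_b[:]

def mutate(genome, mut_rate=0.3):
    # Bit-flip mutation applied right after crossover.
    return [bit ^ 1 if random.random() < mut_rate else bit for bit in genome]

a, b = [0] * 8, [1] * 8
child1, child2 = single_point_crossover(a, b)
print(mutate(child1), mutate(child2))
```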

Elitism and Selection of New Population for the Next Generation. Elitism is mainly used to ensure that the best individuals of the previous generation will survive to the next generation. In this program, 20% of the best individuals from the previous generation will automatically become the individuals for the next generation. In this way, every new generation will be at least as good as the previous one.
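The selection step can be sketched as follows, assuming the population is already ranked best to worst; the helper name and example genomes are hypothetical.

```python
def next_generation(ranked_population, offspring, elite_fraction=0.20):
    """Carry the best 20% of the previous generation forward unchanged, then fill with offspring."""
    n = len(ranked_population)
    n_elite = max(1, int(elite_fraction * n))
    elites = ranked_population[:n_elite]      # population is already ranked best to worst
    return elites + offspring[: n - n_elite]

prev = [[1, 1, 1, 1], [1, 1, 1, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
kids = [[1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
print(next_generation(prev, kids))
```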

Application

The HGA is then applied to a realistic reservoir model with an area of 14 km² to demonstrate the application of the methodology (field characteristics are provided in the complete paper).

The optimization process was carried out to solve a production-strategy problem for vertical wells. The HGA is used to optimize the NPV, with production variables of well placement, production and injection rates, and the scheduling of the conversion of production wells to injection wells.

The optimization process is divided into two main schemes: (1) optimizing the placement of 10 production wells and (2) optimizing the scheduling of the conversion of production wells to injection wells, together with their rates. The first scheme aims to locate the 10 production wells so as to optimize cumulative oil production. After the first scheme has been optimized, 10 production wells with varied rates exist. The duration of the contract is 19 years. Production management decides to convert production wells to injection wells after 5 years of production in order to enhance oil recovery. The number of injection wells and the time at which they are activated are defined by the optimizer, and the injection rate is optimized together with those two variables. The constraints are as follows: The injection rate is between 100 and 1,500 BWPD, conversions may occur between the sixth and the 19th year, the time span between the conversion of one well and the next is 1 year, and the maximum water cut is 95% (producers are shut in if this constraint is violated).
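The paper does not give its exact encoding of these variables; the sketch below merely shows one way a candidate conversion plan could be generated within the stated bounds (the 95% water-cut limit is enforced by the simulator rather than by the encoding), and the function name and well count of three conversions are hypothetical.

```python
import random

def random_conversion_plan(n_producers=10, n_convert=3):
    """Hypothetical encoding of the Scheme 2 variables under the stated constraints."""
    wells = random.sample(range(1, n_producers + 1), n_convert)   # which producers become injectors
    start = random.randint(6, 19 - (n_convert - 1))               # conversions between year 6 and 19
    years = [start + i for i in range(n_convert)]                 # at least 1 year between conversions
    rates = [random.randint(100, 1500) for _ in wells]            # injection rate, 100 to 1,500 BWPD
    return list(zip(wells, years, rates))

print(random_conversion_plan())
```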

In this optimization process, the crossover rate was set to 0.6 and the mutation rate to 0.3. The population size for each generation is N=10. For the first scheme, the evaluation was performed with the reservoir simulator to obtain the cumulative oil production; for the second scheme, the objective function was the NPV.
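For reference, the NPV objective discounts the yearly net cash flows to present value; the short sketch below shows the calculation with a placeholder 10% discount rate and made-up cash flows, since the paper's economic assumptions are not reproduced here.

```python
def npv(cash_flows, discount_rate=0.10):
    """Net present value of yearly net cash flows (year 1, 2, ...); the 10% rate is a placeholder."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Example: each cash flow is oil revenue minus water-injection and operating costs for that year.
print(npv([12e6, 15e6, 14e6, 9e6]))
```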

Data and Results

Scheme 1: Well-Placement Optimization of 10 Production Wells. The optimization with the HGA was conducted on the previously described realistic reservoir model. The optimal fitness value was obtained after 604 simulations, at the 64th generation. The total time needed for the whole optimization process with the HGA was 6.5 hours. A conventional GA was then used to optimize the same problem; it required 640 simulations at the 64th generation to reach the optimal fitness value of 88 million STB. This confirms the superiority of the HGA in this study.

Scheme 2: Optimizing the Scheduling of Conversion of Production Wells to Injection Wells. The positions of the 10 existing production wells are plotted at the fifth year of production. The NPV is determined to be USD 290 million by converting Production Wells 1, 2, and 10 at Years 12, 13, and 14, respectively. The injection rate for Wells 1 and 2 is 900 BWPD, while for Well 10, the rate is 1,500 BWPD. If the field is not optimized (conversion of production wells to injection wells is not taken into account), the economic calculation puts the NPV at USD 275 million. Use of the HGA optimization therefore adds approximately USD 15 million in NPV.

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 177442, “Production-Optimization Strategy Using a Hybrid Genetic Algorithm,” by Damian Dion Salam, Irwan Gunardi, and Amega Yasutra, Bandung Institute of Technology, prepared for the 2015 Abu Dhabi International Petroleum Exhibition and Conference, Abu Dhabi, 9–12 November. The paper has not been peer reviewed.