Human resources

Explaining Human Judgment Failures: Heuristics and Biases from the Laboratory to the Field

This regular column addresses the social side of engineering. We are pleased to include an article written by two of the more prominent names in decision-theory research. Dale Griffin and Thomas Gilovich describe interesting and important insights from the study of heuristics and biases.

Their views are in apparent conflict with an earlier article in this column by Gary Klein.

I encourage you to read/reread Klein’s column after you read the one below. Which view resonates with you? Which is more relevant to engineering or operations work? We will return to this subject in a future column with other views and opinions.

Howard Duhon, GATE, Oil and Gas Facilities Editorial Board


In a Culture Matters article (Oil and Gas Facilities, February 2012), psychologist Gary Klein explained how judgment in an emergency situation could be compromised by too much thought. He made a convincing case that in time-pressured situations, such as responding to a gas leak in a transport vessel, swift intuitive judgment, which operates by unconscious pattern matching, may be better left unfettered by conscious deliberation, which operates by rules and regulations.

We present the counterpoint—a different view of intuition that is supported by considerable research. In particular, we argue that most failures of judgment in upstream oil and gas planning result from too little deliberation and too much acceptance of swift intuitive solutions.

Our more pessimistic view of human judgment, known in psychology as the heuristics and biases (H&B) approach, seems to match some important perspectives within the oil and gas industry. Reports from the Independent Project Analysis database indicate that 1 in 8 major projects is a planning disaster. According to Stephen Begg of the Australian School of Petroleum, the “vast majority of projects take longer, cost more, and produce less than predicted.”

This dismal summary is consistent with findings from other areas of planning and forecasting: Megaprojects of all kinds, in both the private and public sectors, are beset by time and cost overruns and by lower-than-predicted profitability (Flyvbjerg et al. 2003). As a result, prominent academics and commentators in the oil and gas industry have looked to psychology to understand the depth of these challenges.

As an SPE Distinguished Lecturer in 2011, Begg presented his course on “Reliability of Expert Judgments and Uncertainty Assessments.” He said a major cause of the pattern of unfortunate outcomes within the oil and gas industry is that “people grossly underestimate uncertainty.” He and other scholars of engineering judgment have drawn on H&B research in psychology to explain the trouble with uncertainty and to develop methods to overcome it.

We will briefly review the history of the H&B tradition and explore its relevance to the oil and gas industry. 

Making Sense of Perceptual Illusions

Daniel Kahneman’s research that led to his award of the 2002 Nobel Prize in economic sciences started in the 1960s with a guiding metaphor: Mistakes of human judgment have much in common with visual (perceptual) illusions. 
The logic of studying perceptual illusions is that the failures of a system, rather than its successes, better reveal the rules that the system follows.

Fig. 1—An example of a perceptual construction.

Consider the moon illusion: the full moon looks enormous on the horizon, but more modestly sized when high in the sky. There is little to learn from the constancy of its perceived size along the long arc of the overhead sky; however, its illusory magnification when it sits on the horizon provides insight about the way that the visual system uses contextual detail to compute perceived distance and, hence, perceived size. The study of perception in psychology has led to an understanding of how the perceptual system effortlessly and without awareness creates a complete image from incomplete and uncertain information.

An example of this perceptual construction is presented in Fig. 1. The three Pac-Man-shaped circles create the perception of a white triangle floating above the page. Visual illusions teach us that our brains run on “automatic”—our conscious mind receives the output of unconscious processes, but we have no way to know for sure whether our internal perceptions are correct or not—that is, unless we check our perceptual conclusion with an objective measurement device, such as a ruler. 

Kahneman and his colleague, Amos Tversky, argued that as the brain evolved, existing processes in perceptual analysis were co-opted as tools for higher level cognitive processing and led to the automatic, effortless processing that characterizes human intuitive judgment. 

Shane Frederick of Yale University has collected a number of puzzles as illustrations of cognitive illusions to show the automatic nature of human judgment. Consider the problem: 

In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

For most people, there is an immediate, automatic tendency to divide 48 days by 2 to get the time for the patch to cover half the lake. You probably experienced an immediate pull toward the answer 24 and then, recognizing the trick in the question, you might have overridden this pull with a logical analysis that took some effort.

The correct answer (47 days) is found by less than 30% of students sampled at typical US state universities, and by about 50% of students sampled at elite and highly selective private US universities. The poor performance arises because our unconscious cognitive processes (called System 1) are always active, whereas our deliberative processes (System 2) need to be cued into action.
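
The arithmetic can be checked in a few lines. The sketch below is ours, not from the article; it simply walks the doubling backward from day 48:

```python
# If the patch doubles every day and covers the whole lake on day 48,
# undoing one doubling shows when it covered half the lake.
coverage = 1.0  # fraction of the lake covered on day 48
day = 48
while coverage > 0.5:
    coverage /= 2   # undo one day's doubling
    day -= 1
print(day, coverage)  # -> 47 0.5: half the lake, one day earlier
```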

Heuristics Guide Judgment

Early research pointed to three powerful heuristics that guide judgment. One is the representativeness heuristic, which involves automatic assessments of the fit between evidence and outcome.

A classic demonstration of this heuristic involved asking participants to bet on which of two sequences would appear in 20 rolls of a die that has two red faces and four green faces: 1) RGRRR, or 2) GRGRRR. Which pattern is more likely?

Sixty-five percent of participants risked USD 25 on the second pattern (GRGRRR). Do you see why this is irrational? The first sequence is contained within the second (GRGRRR includes RGRRR), so it is necessarily more likely to occur. But the second sequence, with its alternations, is more representative of a random sequence from a die with a one-third chance of showing a red face. In the language of perception, the second sequence “looks” more likely; it “fits” our mental prototype of a random sequence with a one-third probability of red and a two-thirds probability of green.
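
A rough simulation makes the point concrete. The sketch below is our illustration (the helper function and trial count are arbitrary); it estimates how often each sequence shows up somewhere within 20 rolls of a die with two red and four green faces:

```python
import random

def appears(target, trials=100_000, n_rolls=20):
    """Estimate the chance that target appears within n_rolls rolls."""
    hits = 0
    for _ in range(trials):
        rolls = "".join(random.choice("RRGGGG") for _ in range(n_rolls))
        if target in rolls:
            hits += 1
    return hits / trials

print("RGRRR :", appears("RGRRR"))    # comes out noticeably more often...
print("GRGRRR:", appears("GRGRRR"))   # ...than the "more representative" sequence
```

Every simulated run that contains GRGRRR necessarily contains RGRRR as well, so the shorter sequence can never come out less often.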

The second type of heuristic is the availability heuristic, which involves an assessment of the ease with which an example or solution comes to mind. For example, words of the form _ _ _ _ n _ are estimated to be less frequent than words of the form _ _ _ ing. It is naturally easier to retrieve words ending in “ing” than words with an “n” in the penultimate position, but again this is a misleading search cue because the second set of words is a subset of the first. Therefore, most respondents who choose “_ _ _ ing” as the more frequent set of words are incorrect. The second form “feels” more likely and is thus judged to be more frequent.

These two quick and dirty heuristics often serve us well, but they nonetheless lead to a general tendency to favor forecasts or predictions that are highly detailed. Why is this a problem? Highly detailed plans for the future are convincing: They “look” and “feel” like good representations of the future, but each additional detail makes that planning sequence increasingly unlikely to work out in that manner.

This was demonstrated when expert forecasters were asked to rate either the probability that the next year would show “a 30% drop in the consumption of oil in the US,” or the probability that the next year would show “a dramatic increase in oil prices and a 30% drop in the consumption of oil in the US.” As expected, the double-barreled scenario (price increase plus a drop in demand) was seen as more probable than the basic drop in demand. But, logically, it cannot be: every future in which prices rise and consumption drops is also a future in which consumption drops, so the conjunction can be no more probable than the single event on its own.

The Planning Fallacy

The operation of the representativeness and availability heuristics helps explain the prevalence of what Kahneman and Tversky termed the “planning fallacy.” This is the well-documented tendency to treat each new project as a unique enterprise, leading planners to confidently forecast that the project will unfold as planned even with full awareness that few past projects have been completed on time and on budget.

A basic principle of statistical thinking combines base rates of success—the probability of success derived from past projects in the same and similar domains—with the projected outcome based on the plans for the current project. If the projections from the planning phase have turned out to be highly accurate in the past, then they can be weighted fairly heavily in determining the final forecasts compared with the relevant base rates.

However, in areas where plans are poor predictors of final outcomes, as is typical, the base rates of success should weigh more heavily than the specific current plans. But intuitive prediction works on a heuristic basis, and intuitive forecasts focus exclusively on the evaluation of current plans and neglect base rates, thus leading to the planning fallacy in nearly every area of project planning that has been examined.
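
One way to picture this weighting is as a simple linear blend of the plan-based estimate and the base rate. The function below is a minimal sketch under that assumption; the names, the weight, and the numbers are our illustration, not a method prescribed by the H&B literature:

```python
def blended_forecast(plan_estimate, base_rate, plan_weight):
    """Shrink a plan-based estimate toward the historical base rate.

    plan_weight reflects how well plan-based forecasts have predicted
    actual outcomes in the past (0 = ignore the plan, 1 = trust it fully).
    """
    return plan_weight * plan_estimate + (1 - plan_weight) * base_rate

# Plans call for a 12-month schedule; comparable past projects averaged
# 18 months, and plans have historically been weak predictors (weight 0.3).
print(blended_forecast(plan_estimate=12, base_rate=18, plan_weight=0.3))  # -> 16.2
```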

A third common heuristic of intuitive judgment is the anchoring and adjustment heuristic. People anchor on a suggested number (e.g., “Do you think there are more or less than 573.9 billion bbl of proven oil reserves?”) and then adjust only slightly away from the suggested number (e.g., “There’s more than that—probably at least 650 billion bbl.”). 

When Begg and his colleagues gave this test to petroleum industry experts, those given the 573.9 billion bbl anchor estimated an average total of 682 billion bbl; those given a 1,722 billion bbl anchor estimated an average total of 1,932 billion bbl.

This tendency to mentally stick too closely to anchor values also gives rise to interval overconfidence: the tendency to give confidence intervals that are clustered too tightly around the central value. A 90% confidence interval around a forecast value (e.g., the price of oil in 5 years’ time) should, of course, capture the actual outcome value in 90% of such forecasts. However, a vast collection of such forecasts across many industries and domains finds that 90% confidence intervals include the correct outcome in about 30% to 40% of forecast estimates. The rule of thumb based on studies of oil- and gas-related forecasts is that both 80% and 99% confidence intervals miss about 50% of the observed values—hardly a cause for celebration.
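
The calibration claim is easy to audit against any archive of past interval forecasts. A minimal sketch, using hypothetical records, looks like this:

```python
# Each record is (lower_bound, upper_bound, actual_outcome) for a past
# forecast that claimed a 90% confidence interval. Values are made up.
records = [
    (60, 90, 95),
    (50, 70, 65),
    (80, 120, 130),
    (40, 55, 52),
]
hits = sum(low <= actual <= high for low, high, actual in records)
print(f"Hit rate: {hits / len(records):.0%}")  # 50% here, far below the stated 90%
```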

Using Logic to Temper Intuition

How does the study of these underlying heuristics help us make better judgments and avoid the planning fallacy and interval overconfidence? A forecaster’s heuristic assessments, like perceptual illusions, are compelling, so avoiding bias is not easy.
A fundamental insight of the H&B research is that the strength of an intuition can be a poor guide to its accuracy. Thus, in major decisions, intuition must be tested and tempered by logic. 

A well-tested solution to the planning fallacy is the technique of reference class forecasting: A set of relevant, similar projects with known outcomes is used to set a likely range of outcomes for the current project. The unique (and probably optimistic) plans for the current project should be relied upon over the reference class solution only with caution and due deliberation.
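
A minimal reference-class sketch follows, assuming cost-overrun ratios (actual cost divided by estimated cost) are available for comparable completed projects; the data and the P10/P50/P90 summary are our illustration:

```python
import statistics

# Overrun ratios from comparable completed projects (hypothetical data).
overrun_ratios = [1.05, 1.10, 1.15, 1.20, 1.25, 1.35, 1.45, 1.60]

current_estimate_musd = 400  # plan-based cost estimate, million USD

deciles = statistics.quantiles(overrun_ratios, n=10, method="inclusive")
p10, p90 = deciles[0], deciles[8]   # 10th and 90th percentile ratios
p50 = statistics.median(overrun_ratios)

print(f"P10 cost: {current_estimate_musd * p10:.0f} million USD")
print(f"P50 cost: {current_estimate_musd * p50:.0f} million USD")
print(f"P90 cost: {current_estimate_musd * p90:.0f} million USD")
```

The plan’s own estimate is then judged against this range rather than trusted on its own.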

For the ubiquitous bias of interval overconfidence, it is necessary to remove the focus on the “best guess” anchor when setting the confidence interval for a forecast value. This can be done in two ways: first, the forecaster can assess the lower and upper tails of the distribution without regard to its center, the best guess value; second, the forecaster can be asked to provide reasons why the values might be higher than the initial upper bound or lower than the initial lower bound. This forces the forecaster to consider the set of unlikely tail possibilities that might occur, but are rarely considered in light of the best guess anchor.

The talk of holding back and restraining intuition naturally invites the skeptical response that intuition and heuristics have been taken out of forecasting through the increasing prominence of computer-based prediction models. However, almost all computer-aided forecasts still use human judgment as an input to some key steps. Neither expertise nor computer processing power will prevent judgment errors until the human operator becomes infallible or aware enough of his or her own fallibility to correct for the biases induced by intuitive heuristics.  In our view, awareness is a more viable goal than infallibility. 
 

For Further Reading

Begg, S. 2011. Reliability of Expert Judgments and Uncertainty Assessments. Presented as an SPE Distinguished Lecture during the 2010–2011 season.
Flyvbjerg, B., Bruzelius, N., and Rothengatter, W. 2003. Megaprojects and Risk: An Anatomy of Ambition. Cambridge, UK: Cambridge University Press.


Dale Griffin is an associate dean and a professor of marketing at the Sauder School of Business at the University of British Columbia. He holds a BA degree in psychology from the University of British Columbia and a PhD in psychology from Stanford University.

Thomas Gilovich is a professor of psychology at Cornell University and codirector of the Cornell Center for Behavioral Economics and Decision Research. He holds a BA degree in psychology from the University of California and a PhD in psychology from Stanford University.