The topic of “the cloud” has attracted significant attention throughout the past few years (Cherry 2009; Sterling and Stark 2009) and, as a result, academics and trade journals have created several competing definitions of “cloud computing” (e.g., Motahari-Nezhad et al. 2009). Underpinning this article is the definition put forward by the US National Institute of Standards and Technology, which describes cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction” (Garfinkel 2011, p. 3). Despite the lack of consensus about definitions, however, there is broad agreement on the growing demand for cloud computing. Some estimates suggest that spending on cloud-related technologies and services in the next few years may climb as high as USD 42 billion/year (Buyya et al. 2009).
Roster et al. (2010) identify three of the more commonly cited reasons for migrating functions and capabilities to the cloud:
- Cost Reduction. For example, insurance companies are seeking more flexible cost structures, and the cloud allows firms to shift IT spending from capital outlays to operating expenses, thereby giving them a highly flexible “pay as you go” resource.
- Deployment Flexibility. In the media industry, for instance, the type of content that consumers request can suddenly go viral and then just as quickly fall out of fashion, so demand patterns can swing sharply from one extreme to the other. A cloud-based infrastructure affords companies in these industries a high degree of flexibility with regard to the amount of computing and data-storage resources that they need at any moment.
- Implementation Speed. Doctors in small medical practices usually do not have their own information technology (IT) departments, and cloud computing therefore holds appeal for such enterprises because the technical support bundled into cloud service packages provides the most value for these small businesses in the shortest possible time.
Despite the compelling case for moving toward the cloud, however, the absorption of these technologies and services has been uneven. Some industries—most notably, the financial services and telecommunications sectors—have been relatively quick adopters, while others have approached the cloud more cautiously and slowly.
The upstream oil and gas industry generally falls into the category of cautious adopters. Although there is considerable evidence that the upstream oil and gas sector has begun to move toward the cloud (Beckwith 2011), this progress has typically been in the form of private clouds rather than public ones (Feblowitz 2011), or hybridized solutions that mix cloud and existing noncloud IT resources (Mathieson and Triplett 2011). Therein lie the main objectives of this article. First, we will identify three popular business models that have emerged in the marketplace for delivering cloud-based resources and capabilities to customers. Second, we will identify the concerns and issues that have arisen within the upstream oil and gas industry in response to cloud computing. We will then show how many of these challenges have also been encountered in other industries, and use these examples to shine light on how these problems might be overcome in the oil and gas industry. Next, we will consolidate these emerging trends from other industries into a prediction: Whereas current cloud strategies in the oil and gas industry tend to be conservatively clustered around the concept of private clouds and hybridized cloud solutions, we believe that enabling technologies and conditions will fall into place in a way that makes the public cloud a far more attractive option for the upstream oil and gas industry in the years ahead. We will then conclude with a discussion about the implications of this projected shift toward the public cloud.
Delivery Service Models
Although new and inventive approaches to delivering cloud computing are still being experimented with, three dominant service models have emerged: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) (Jansen and Grance 2011). The most basic of these is IaaS, in which customers “can buy processing, storage and network services and then build their own systems on top of this infrastructure” (Garfinkel 2011, p. 3). Among the more useful features of IaaS is that customers pay for functionality by the hour, making it an attractive option for those who want to maintain some semblance of “business as usual”—that is, situations in which the customer is simply filling in a short-term hardware deficiency. Customers might also be attracted to this model because of its near-infinite scalability and because they are not directly responsible for managing the hardware used to provide the service. A customer would typically use IaaS if the stack of software they wanted to run was nonstandard. Amazon and Rackspace are considered market leaders in the IaaS space (Garfinkel 2011).
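To make the IaaS model concrete, the short sketch below shows how a customer might provision a single hourly-billed virtual machine on demand and release it when the short-term need has passed. It is a minimal illustration, assuming Amazon EC2 and the boto3 Python library; the machine image ID, region, and instance type are hypothetical placeholders rather than a recommended configuration.

```python
# Minimal sketch of the IaaS model: the customer requests raw compute
# capacity on demand, is billed by the hour, and never owns or manages
# the underlying hardware. Assumes Amazon EC2 via the boto3 library;
# the image ID and instance type are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision a single virtual machine ("instance") from a machine image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image ID
    InstanceType="t2.micro",          # small, hourly-billed instance size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

# When the short-term need has passed, the instance is released and
# billing stops -- the "pay as you go" property described above.
ec2.terminate_instances(InstanceIds=[instance_id])
```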
PaaS is “one step up: vendors provide preconfigured computers running operating systems and applications” (Garfinkel 2011, p. 3). This kind of service is typically more attractive for customers who require software and hardware combinations that are fairly standard, and for situations that require no bespoke configuration between hardware and base software such as operating systems. The customer then adds on top of this basic configuration a specific application such as a website. PaaS also frequently holds appeal for customers who are using older software that may be quirky or that has been highly customized over the years. In some of these situations, customers might even be using software sold to them by a vendor that is no longer in business. PaaS makes it possible for these kinds of customers to retain a high degree of control over the setup of the quirkier aspects of their system while relying on the PaaS service provider for the more standard elements. Amazon has a major presence in this market space, too, along with the Google App Engine and Microsoft’s Azure platform (Garfinkel 2011).
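In practical terms, the dividing line between IaaS and PaaS is what the customer supplies. The minimal sketch below shows essentially everything a PaaS customer might hand over: just the application itself, written against a standard interface (WSGI, in Python’s case); the operating system, web server, and runtime are the platform’s responsibility. Deployment metadata that a platform such as Google App Engine would also require is omitted for brevity.

```python
# Minimal sketch of a PaaS deployment artifact: the customer supplies
# only the application, written against a standard interface (WSGI).
# The platform provides and patches the operating system, web server,
# and language runtime underneath it.
def application(environ, start_response):
    # The platform routes incoming HTTP requests to this entry point.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a platform-managed environment\n"]
```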
SaaS “is at the top of the cloud computing stack. Here the cloud providers have created applications running on server farms that may themselves be geographically distributed” (Garfinkel 2011, p. 3). Salesforce.com, Facebook, Flickr, eBay, and Amazon Marketplace are common examples of this approach (Garfinkel 2011). Microsoft’s Office 365 is another popular example: Rather than purchasing a copy of the software outright and owning it indefinitely (often without version upgrades), customers pay a monthly fee and rent the functionality it provides on a month-by-month basis, with version upgrades included automatically.
Problems in Moving to the Cloud
Feblowitz (2011, p. 32) articulates several of the concerns that have arisen as the upstream oil and gas sector moves toward the cloud:
There are several areas that are considered problematic. There is reluctance in the industry to have data stored outside of the firewall. There is a concern not only with intrusions that could compromise IT, but also with protection of trade secrets, especially when it comes to sensitive areas such as well logs. There is also an issue of scale for some applications. For example, much of exploration and production depends on 3D rendering and graphics accelerators, which have yet to make it off the workstation because of the size of the files and speed required for viewing. Another consideration is the sunk investment in legacy IT applications and infrastructure that the industry has already made.
There is no shortage of support for each of these points. Data security is clearly a priority for the upstream oil and gas industry (Yuan et al. 2011), and many of the new projects that have recently come onstream have layers of defense built into their internal IT infrastructure (e.g., Perrons 2010). Also, oil and gas applications frequently do use files and data sets that are simply too large to be shared across network boundaries without an uncomfortable amount of latency. The data sets generated by seismic surveys are particularly noteworthy in this regard. As Beckwith (2011, p. 44) suggests, today’s seismic data centers can contain as much as 20 petabytes (1 petabyte is equivalent to one billion megabytes) of information, which is equivalent to “926 times the size of the Library of Congress, lapping the Earth six times in a single continuous bookshelf.”
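A back-of-envelope calculation makes the scale of this barrier clear. Even under the optimistic assumption of a dedicated 1 Gb/s wide-area link running at full capacity, moving a 20-petabyte archive across a network boundary would take roughly five years:

```python
# Back-of-envelope estimate: time to move a 20-petabyte seismic archive
# over a sustained 1 Gb/s link (an optimistic, illustrative assumption).
archive_bytes = 20 * 10**15            # 20 petabytes
link_bits_per_second = 10**9           # 1 gigabit per second, fully utilized

seconds = archive_bytes * 8 / link_bits_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1f} years")            # prints: 5.1 years
```

Figures like these explain why this kind of work has, so far, stayed close to the workstation.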
These technical realities of the industry bring about an important question: How can the upstream oil and gas sector yield as much benefit as possible from cloud-based technologies while working within these constraints? Private and hybrid clouds have emerged as popular solutions.
Types of Clouds
The broad range of cloud computing offerings in the marketplace can be divided into three basic types of architecture: public, private, and hybrid. Public clouds are typically set up by commercial providers that offer an Internet-accessible interface for creating and managing computing resources within their own physical domains (Sotomayor et al. 2009). The attractiveness of this concept lies in the fact that users can enjoy near-infinite scalability and very high system reliability. What’s more, because of their highly virtualized nature, these resources can be procured competitively from a broad array of specialized vendors and IT companies almost anywhere in the world.
At the moment, however, the public cloud model sometimes comes with security risks. Private clouds are one alternative for managing and mitigating these kinds of threats. The objective of private clouds is “not to sell capacity over the Internet through publicly accessible interfaces, but to give local users a flexible and agile private infrastructure to run service workloads within their administrative domains. Private clouds can also support a hybrid cloud model by supplementing local infrastructure and computing capacity from an external public cloud” (Sotomayor et al. 2009, p. 15). In this way, hybrid clouds offer the best of both worlds insofar as this approach makes it possible to manage security-related threats carefully while creating a secure “pipe” through which customers can selectively leverage the scalability of the public cloud when and how they want to. Firms using SAP, a popular brand of enterprise software, have in recent years often been attracted to a hybrid cloud strategy because the company was slow to release a cloud-friendly version of its products (Greenbaum 2012; Hamm 2009). A hybrid cloud architecture allows customers to continue to use data stored in their large on-premises SAP installations and to use these same data within online application software—like, for example, Microsoft’s Office 365—that is hosted in the public cloud.
Hybrid cloud solutions are a clever way to reap many of the benefits of the public cloud while maintaining a higher degree of control over data security, and they are therefore a very useful bridging technology that customers can use to move toward the public cloud while still hanging on to legacy systems or until software vendors can come up with cloud-friendly alternatives. But hybrid systems do come at a cost: They do not offer the near-infinite scalability, extremely high “outsourceability,” and cost efficiency that the totally public cloud does. It therefore follows that these mid-ground solutions do address some of the concerns and issues raised earlier about cloud computing—but they also curtail much of the additional value and functionality that public cloud technologies could potentially deliver. Different approaches have consequently emerged in other industries as they have tried to reach for the additional benefits that the public cloud can offer.
The Move in Other Industries
Health Care
Much like the upstream oil and gas industry, the health care sector is extremely data-intensive (Berndt et al. 2001), and has a strong incentive to leverage cloud-related technologies as much as possible. Medical records are frequently transferred among a large number of professionals working in different organizations, and the ability to share this information more seamlessly would clearly result in significant cost savings. Also, by reducing the potential for mistakes associated with passing information from one person to another, a more automatic and integrated data-exchange system may even save lives (Kamara and Lauter 2010). At the same time, however, these different pieces of information are very sensitive, and are subject to strict security protocols and regulations (Frenzel 2003). But the potential gains offered by the cloud were sufficiently attractive that TC3, a US-based health care services company with access to sensitive patient records and health care claims, moved several of its key applications to Amazon Web Services, a public cloud service provider (Armbrust et al. 2009). Moving these functions to the cloud involved transferring sensitive data that are legally protected by the United States Health Insurance Portability and Accountability Act. To ensure that the data are secure at all times, TC3 encrypts the data before placing them in the cloud. Armbrust et al. (2009, p. 15) justify this approach by contending that:
… there are no fundamental obstacles to making a cloud computing environment as secure as the vast majority of in-house IT environments,[*] and… many of the obstacles can be overcome immediately with well-understood technologies such as encrypted storage, Virtual Local Area Networks, and network middleboxes (e.g., firewalls, packet filters). For example, encrypting data before placing it in the cloud may even be more secure than unencrypted data in a local data center.
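The encrypt-before-upload approach that TC3 adopted is straightforward to sketch. In the minimal illustration below (assuming the Python cryptography library and an Amazon S3-style object store; the bucket name and record contents are hypothetical), the encryption key never leaves the customer’s environment, so the cloud provider only ever holds ciphertext:

```python
# Minimal sketch of client-side encryption before cloud storage: the
# plaintext and the key stay inside the customer's firewall, so the
# cloud provider only ever sees ciphertext.
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # kept on premises; never uploaded
cipher = Fernet(key)

record = b"patient-id=12345; claim=..."    # hypothetical sensitive record
ciphertext = cipher.encrypt(record)

# Only the encrypted form is placed in the public cloud object store.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-claims", Key="record-12345", Body=ciphertext)

# Later, the same local key decrypts data retrieved from the cloud.
obj = s3.get_object(Bucket="example-claims", Key="record-12345")
assert cipher.decrypt(obj["Body"].read()) == record
```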
The TC3 example also sheds light on the jurisdictional dimensions of data management in the public cloud. Many types of data are subject to various controls, and several countries enforce laws that restrict attempts to transfer customer data, patient information, and copyrighted materials across international borders (Armbrust et al. 2009). It is therefore quite understandable that prospective users of the public cloud are sometimes hesitant to pursue this option precisely because it is difficult to ascertain exactly where one’s data are being physically stored (Kamara and Lauter 2010; Naone 2011).** Public cloud service providers are aware of these concerns, however, and have begun to offer servers and storage facilities in multiple legal jurisdictions. For example, both Amazon Web Services and Microsoft Azure have servers physically located in the United States and Europe, and both firms’ customers are welcome to keep data in either region based upon their particular needs and circumstances (Armbrust et al. 2009). Also, in addition to these attempts to manage jurisdictional issues within existing legal frameworks, scholars and legislators in several areas around the world are actively working to amend these kinds of rules to reflect the new realities of cloud computing (Jaeger et al. 2008; Kaufman 2009).
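Keeping data inside a chosen legal jurisdiction is likewise usually a matter of configuration rather than custom engineering. A minimal sketch (again assuming Amazon S3 via boto3; the bucket name is a hypothetical placeholder) is shown below:

```python
# Minimal sketch: constraining stored data to a specific legal
# jurisdiction by choosing the provider's region at creation time.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# The location constraint keeps this bucket's data in European facilities.
s3.create_bucket(
    Bucket="example-well-logs",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
```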
Retail
In earlier generations of the World Wide Web, Internet-based retail transactions were considerably more cumbersome and risky than they are today. Accepting credit card payments from strangers “required a contractual arrangement with a payment processing service such as VeriSign or Authorize.net; the arrangement was part of a larger business relationship, making it onerous for an individual or a very small business to accept credit cards online” (Armbrust et al. 2009, p. 6). A wholesale shift to cloud-style business models was clearly very difficult under these conditions. But the emergence of PayPal changed things quite considerably. The introduction of this market intermediary made it possible to accept credit card payments “with no contract, no long-term commitment, and only modest pay-as-you-go transaction fees” (Armbrust et al. 2009, p. 6). Retail transactions are consequently much easier to conduct in the cloud these days, and online retail sales have grown exponentially as a result.
It is not unreasonable to expect that similar kinds of market intermediaries or new enabling technologies may also appear to support the upstream oil and gas industry as it evolves and becomes increasingly reliant on information technologies. The industry has done a reasonably thorough job thus far of expressing its concerns about transferring sensitive data (e.g., Feblowitz 2011; Yuan et al. 2011). In light of the industry’s considerable economic impact (Yergin 1991) and the “size of the prize” that goes with it, someone in the market—perhaps an industry incumbent like an oilfield service company, or maybe a new entrant—may eventually rise to the challenge and offer solutions that reduce the barriers associated with sending this kind of sensitive data into increasingly public parts of the cloud. Data-encryption protocols are another potentially promising way to address these types of issues. The same dynamic forces that reshaped the retail sector may, with a bit of tweaking, help the oil and gas industry get to the public cloud more quickly too.
Some data-security experts even go so far as to suggest that data can be safer in the public cloud than in the privately managed facilities of companies that do not specialize in IT. Jeremiah Grossman, a former information security officer at Yahoo, argues that the “average enterprise, whether you’re talking small, medium, or the largest of the large—they’re in their respective businesses. A bank isn’t in the business of technology. A retailer isn’t in the business of managing IT infrastructure. A [cloud] service provider … [has] very particular skills at making really secure infrastructures” (Bergstein 2011, p. 20).
Scientific Research
A team of researchers at the Medical College of Wisconsin’s Biotechnology and Bioengineering Center has made significant headway in an extremely data-intensive area of science by using the public cloud. The team collects vast amounts of data generated by mass spectrometry instruments that determine the elemental composition and chemical structure of proteins expressed by organisms. The massive computational resources required for this kind of research would normally have made this undertaking far too expensive. Rather than capitulating in the face of this constraint, however, the team developed a purpose-built tool called ViPDAC (Virtual Proteomics Data Analysis Cluster) that made it possible to use less-expensive public cloud services to successfully manage the massive amounts of data and perform the complex calculations required in this research area (Sultan 2010).
An open-source software framework known as Hadoop is another important example of this principle. Hadoop achieves massive computing power by subdividing large computational problems and then coordinating many servers as they work on their respective parts of the problem. Yahoo has successfully used this approach to manage as much as 25 petabytes of enterprise data (Shvachko et al. 2010).
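The division of labor behind Hadoop can be illustrated with a toy MapReduce example. The sketch below runs the “map” and “reduce” phases locally on a tiny word-count problem purely to show the pattern; on a real Hadoop cluster the chunks would be distributed file blocks and the map tasks would run in parallel on many servers:

```python
# Toy illustration of the MapReduce pattern that underpins Hadoop: a
# large problem is split into chunks, workers "map" over their chunks
# independently, and the partial results are merged in a "reduce" step.
from collections import Counter
from functools import reduce

def map_phase(chunk: list[str]) -> Counter:
    # Each worker independently counts words in its own chunk of data.
    return Counter(word for line in chunk for word in line.split())

def reduce_phase(a: Counter, b: Counter) -> Counter:
    # Partial counts from different workers are merged pairwise.
    return a + b

# A toy "data set" split into chunks, as a cluster splits large files.
chunks = [
    ["well log well", "seismic survey"],
    ["well survey", "seismic seismic"],
]

partials = [map_phase(c) for c in chunks]  # runs in parallel on a cluster
totals = reduce(reduce_phase, partials, Counter())
print(totals)  # Counter({'well': 3, 'seismic': 3, 'survey': 2, 'log': 1})
```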
These precedents have important implications for the upstream oil and gas industry. As noted earlier, the large size of the industry’s software applications and data sets has been cited as a significant barrier for cloud computing (Feblowitz 2011). But in the face of a similarly vast amount of data, the team at the Medical College of Wisconsin developed a customized technical solution that made it possible to move enormous amounts of data and highly sophisticated computational tasks to the public cloud, and Yahoo used Hadoop to a similar end. It therefore follows that such a bridge to the public cloud may not be out of the question for the upstream oil and gas industry too.
In addition to this, the efficiency and speeds available in off-the-shelf computing technologies have advanced at a remarkable rate over the years (Grove 1996), and there is little evidence to suggest that this trend will abate anytime soon. Similarly impressive improvements are expected in the future with data-transfer latencies too (Armbrust et al. 2009). Thus, even if no customized, industry-specific solutions are put forward in the marketplace that address the oil and gas sector’s specific technical challenges, the macro-level evolutionary changes that will emerge throughout the IT landscape may at least partially lower the barriers that the industry is facing en route to the public cloud.
Conclusions and Implications
Although there are many differences between the upstream oil and gas industry and the three examples discussed here, a potentially useful theme emerges. In each instance, there were complicating factors that initially made it difficult to move data and computational functions to the public cloud. The technical and logistical challenges facing each of these sectors were in many ways reminiscent of those currently facing the upstream oil and gas industry in its own journey toward cloud computing. But in all the examples, the problem was overcome by some kind of technological solution or a shift in the underpinning market conditions, and then each organization successfully moved mission-critical data and functions into the public cloud.
Microsoft is beginning to hear anecdotal evidence from its clients that points in this same direction. Several companies within sectors that have been more aggressive in moving to cloud computing than the oil and gas industry have openly started to wonder if they should re-engineer their IT strategies away from mid-ground solutions like private or hybrid clouds in favor of designs that more fully leverage the public cloud. We therefore submit that the upstream oil and gas sector will probably arrive at a similar inflection point in its own collective thinking in the years ahead.
This logical extension of emerging trends has consequences for the oil and gas industry right now—specifically, in terms of how assets and IT systems are designed. One of the more remarkable aspects of the oil and gas industry is the longevity of its assets. Production systems and installations frequently continue to produce for several decades (e.g., Pathak et al. 2004) and, as a result, seemingly innocuous design decisions about system architecture that are made in the early days of an asset sometimes have far-reaching consequences many years later. The IT landscape of the upstream oil and gas industry will almost certainly look profoundly different two decades from now—but, alas, that is a source of uncertainty that today’s project planners and design teams cannot avoid.
The evidence presented here therefore makes a strong case in support of highly modular IT architectures that will be relatively easy and inexpensive to change in the future. Although private and hybrid cloud architectures are popular within the industry at the moment because of existing constraints, the examples presented in this paper point to a future that is increasingly predicated on the public cloud. We accordingly believe that companies within the upstream oil and gas industry—including international oil companies, national oil companies, service companies, and vendors—would be well advised to build into their systems enough flexibility and modularity to make this change when the time is right in the years ahead, thereby allowing them to take full advantage of the benefits that cloud computing can offer.
We also believe that this evidence demonstrates how cloud-based applications—widely known as “apps”—can play a larger role in the upstream oil and gas sector. The industry has been very slow to develop and adopt apps. Beckwith (2012, p. 41) points out that, “of the hundreds of thousands of apps now publicly available, only a few dozen are devoted to the [oil and gas] industry.” But the sector’s computing needs would clearly be better served by the introduction of more apps in the future. Whereas E&P companies currently tend to purchase entire software packages at considerable expense and then shuffle data from one package to the next for different types of analysis, calculation, and presentation, these same results could be achieved more seamlessly and efficiently with the use of cloud-based apps (“Web apps”). An engineer’s or operator’s data could be sent securely to large servers elsewhere that perform the required calculations, with the answer returned via an app and viewed on a tablet, smartphone, or computer. In this way, vendors could sell their customers what they actually need: the ability to make calculations and analyze data, and then view the results in their choice of location and on their choice of device. Users could pay for this kind of service on a per-use basis rather than buying expensive software packages that might not be required all the time. Moreover, by moving the more calculation-intensive parts of the work into the cloud, users would no longer require an expensive, high-powered workstation on their desk.
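A minimal sketch of this division of labor is shown below, assuming Python and the Flask web framework; the endpoint name and the simple exponential-decline formula standing in for the heavy computation are illustrative placeholders. A lightweight app on the user’s device posts its data, the cloud-hosted service performs the calculation, and only a small result travels back:

```python
# Minimal sketch of an "app" back end: data arrives from a lightweight
# client, the calculation runs server-side in the cloud, and a small
# result is returned for display on a tablet, smartphone, or computer.
# The endpoint and the exponential-decline stand-in are illustrative.
import math

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/api/decline-forecast")
def decline_forecast():
    payload = request.get_json()
    qi = payload["initial_rate"]   # initial production rate, bbl/day
    d = payload["decline_rate"]    # nominal decline rate, per year
    years = payload["years"]

    # Exponential decline: q(t) = qi * exp(-d * t). In a real service,
    # far heavier work (e.g., full reservoir simulation) would run here,
    # on large servers, instead of on the user's device.
    forecast = [qi * math.exp(-d * t) for t in range(years + 1)]
    return jsonify({"rates_bbl_per_day": forecast})

if __name__ == "__main__":
    app.run()  # in production this would sit behind a cloud app service
```

A tablet or phone app would then issue a single HTTPS POST to this endpoint and render the returned forecast, with the user paying per use rather than per software package.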
*Chen et al. (2010, p. 1) bolster this by suggesting that “few cloud computing security issues are fundamentally new or fundamentally intractable.”
**It is worth noting, however, that the perceived risk associated with moving sensitive data to the public cloud varies quite significantly from one part of the world to the next. Tata Consultancy Services (2012) has found that “relative to their counterparts in Asia Pacific and Latin America, US and European companies were far less likely to put core applications in public clouds.”
Acknowledgments
A version of this article, titled “Public, Private, or Hybrid? What the Upstream Oil & Gas Industry Can Learn from Other Sectors about ‘The Cloud’” (paper SPE 161993-PP), was presented at the SPE Abu Dhabi International Petroleum Exhibition and Conference in Abu Dhabi, United Arab Emirates, 11–14 November 2012. Special thanks to John Bradley from Microsoft for his insights and ideas.
References
- Armbrust, M., Fox, A., Griffith, R., et al. 2009. Above the Clouds: A Berkeley View of Cloud Computing. Technical Report No. UCB/EECS-2009-28, University of California at Berkeley, Department of Electrical Engineering and Computer Sciences, Berkeley, California, USA.
- Beckwith, R. 2011. Managing Big Data: Cloud Computing and Co-Location Centers. J. Pet. Tech. 63 (10): 42–45.
- Beckwith, R. 2012. Apps and the Digital Oil Field. J. Pet. Tech. 64 (7): 40–46.
- Bergstein, B. 2011. Being Smart About Cloud Security. “Business Impact” report series published as a standalone document in October 2011 by Technology Review, p. 20.
- Berndt, D.J., Fisher, J.W., Hevner, A.R., et al. 2001. Healthcare Data Warehousing and Quality Assurance. Computer 34 (12): 56–65.
- Bhatt, A. 2011. Understanding Cloud Computing in Detail. Retrieved 4 January 2012 from http://www.articlesbase.com/computers-articles/understanding-cloud-computing-in-detail-4380872.html.
- Buyya, R., Pandey, S., and Vecchiola, C. 2009. Cloudbus Toolkit for Market-Oriented Cloud Computing. In Lecture Notes in Computer Science, eds. M.G. Jaatun, G. Zhao, and C. Rong, Vol. 5931, pp. 24–44. Berlin: Springer-Verlag.
- Chen, Y., Paxson, V., and Katz, R.H. 2010. What’s New About Cloud Computing Security? Technical Report No. UCB/EECS-2010-5, University of California at Berkeley, Department of Electrical Engineering and Computer Sciences, Berkeley, California, USA.
- Cherry, S. 2009. Forecast for Cloud Computing: Up, Up, and Away. IEEE Spectrum 46 (10): 68.
- Feblowitz, J. 2011. Oil and Gas: Into the Cloud? J. Pet. Tech. 63 (5): 32–33.
- Frenzel, J.C. 2003. Data Security Issues Arising from Integration of Wireless Access into Healthcare Networks. J. Med. Sys. 27 (2): 163–175.
- Garfinkel, S.L. 2011. Cloud Computing Defined. “Business Impact” report series published as a standalone document in October 2011 by Technology Review, p. 3.
- Greenbaum, J. 2012. SAP Energizes Its Cloud Strategy. InformationWeek, published online on 24 May 2012, retrieved 1 September 2012 from http://www.informationweek.com/software/enterprise-applications/sap-energizes-its-cloud-strategy/240000943.
- Grove, A.S. 1996. Only the Paranoid Survive: How to Exploit the Crisis Points That Challenge Every Company and Career. New York: Currency-Doubleday.
- Hamm, S. 2009. Clouds on SAP’s Horizon. BusinessWeek 4130 (May 11): 52–53.
- Jaeger, P.T., Lin, J., and Grimes, J.M. 2008. Cloud Computing and Information Policy: Computing in a Policy Cloud? J. Inform. Tech. & Polit. 5 (3): 269–283.
- Jansen, W. and Grance, T. 2011. Guidelines on Security and Privacy in Public Cloud Computing. Special Publication No. 800-144, Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology, US Department of Commerce, Gaithersburg, Maryland.
- Kamara, S. and Lauter, K. 2010. Cryptographic Cloud Storage. In Lecture Notes in Computer Science: Financial Cryptography and Data Security, eds. R. Sion et al. Vol. 6054, pp. 136–149. Berlin: Springer-Verlag.
- Kaufman, L.M. 2009. Data Security in the World of Cloud Computing. IEEE Security & Privacy 7 (4): 61–64.
- Mathieson, D. and Triplett, C. 2011. Are There Clouds in Our Blue Sky Research Programs? J. Pet. Tech. 63 (9): 16–18.
- Motahari-Nezhad, H.R., Stephenson, B., and Singhal, S. 2009. Outsourcing Business to Cloud Computing Services: Opportunities and Challenges. HP Laboratories Paper No. HPL-2009-23, Hewlett Packard Labs, Palo Alto, California, USA.
- Naone, E. 2011. Transcending Borders, But Not Laws. “Business Impact” report series published as a standalone document in October 2011 by Technology Review, p. 15.
- Pathak, P., Fidra, Y., Avida, H., et al. 2004. The Arun Gas Field in Indonesia: Resource Management of a Mature Field. Paper SPE 87042-MS presented at the SPE Asia Pacific Conference on Integrated Modelling for Asset Management, Kuala Lumpur, Malaysia, 29–30 March.
- Perrons, R.K. 2010. Perdido Ties Together Shell Digital Oilfield Technologies. World Oil 231 (5): 43–49.
- Roster, J., Moore, C., and Pfeiler, K. 2010. Who Really Cares About the Cloud? An Industry Perspective. Paper presented at the Gartner webinar, 24 August.
- Shvachko, K., Kuang, H., Radia, S., et al. 2010. The Hadoop Distributed File System. Paper presented at the IEEE Conference on Mass Storage Systems and Technologies, Incline Village, Nevada, USA, 3–7 May.
- Sotomayor, B., Montero, R.S., Llorente, I.M., et al. 2009. Virtual Infrastructure Management in Private and Hybrid Clouds. IEEE Internet Comput. 13 (5): 14–22.
- Sterling, T. and Stark, D. 2009. A High-Performance Computing Forecast: Partly Cloudy. Comput. Sci. Eng. 11 (4): 42–49.
- Sultan, N. 2010. Cloud Computing for Education: A New Dawn? Intl. J. Inform. Manag. 30 (2): 109–116.
- Tata Consultancy Services. 2012. The State of Cloud Application Adoption in Large Enterprises: A TCS Global Trend Study–March 2012.
- Yergin, D. 1991. The Prize: The Epic Quest for Oil, Money, and Power. New York: Touchstone.
- Yuan, H., Paul, D., and Mahdavi, M. 2011. Security: Digital Oil Field or Digital Nightmare? J. Pet. Tech. 63 (8): 16–18.
About the Author
Robert K. Perrons, SPE, joined Queensland University of Technology as an associate professor in 2011 after working in a wide variety of roles and locations for Shell International’s E&P division. He started his career in Shell’s Strategy and Economics team in 1997, and then worked for several years as a production engineer in the company’s overseas operations (offshore and onshore). He then left Shell for 3 years to work as an industrial research fellow at the University of Cambridge, and rejoined Shell in 2004 to become the company’s executive coordinator of R&D. He earned a BEng in mechanical engineering from McMaster University, an SM degree in technology and policy from the Massachusetts Institute of Technology, and a PhD in engineering from the University of Cambridge, where he was a Gates Cambridge Scholar. He is a chartered engineer, and an affiliated researcher at the University of Cambridge Centre for Strategy and Performance.
Adam Hems, SPE, began his career in the United Kingdom in the 1990s developing Internet applications and commerce-related websites. He moved to Texas in 2000 to work as a consultant for a broad range of E&P companies, and then began working as a senior consultant for Microsoft in 2005. Since 2011, he has been Microsoft’s technical strategist for the oil and gas and mining industries. He earned an engineering degree from Coventry University in the UK.