
MEMTS Technical Section Q&A: Data, Models, and Trust—The Next Frontier in Methane Emissions Quantification

This article is the first in a Q&A series from the SPE Methane Emissions Management Technical Section (MEMTS) on methane intelligence and how oil and gas teams translate emissions data into credible decisions and measurable reductions.

Prairie Pumpjack Silhouette in Panorama
Source: Imagine Golf/Getty Images

In this article, Babatunde Anifowose, SPE, chair of the SPE Methane Emissions Management Technical Section, talks with Peter Roos, founder and chief innovation officer at Bridger Photonics, and Mike Thorpe, chief scientist at Bridger, about how data, models, and mindsets are reshaping methane emission quantification. As the first piece in a series on methane intelligence, the conversation explores detection sensitivity, uncertainty, and the challenge of scaling from individual plumes to basinwide inventories, as well as how policy frameworks, market expectations, and operator/technology partnerships can build (or erode) trust and enable “good-enough” data to drive real emissions reductions.

Methane quantification has entered a new phase where detection is no longer the only question. The harder question is whether measurements are trusted (e.g., by the operators who must act, the regulators who must defend decisions, and the stakeholders who expect transparency). In this conversation, we focus on the intersection between data, models, and human judgment, including how performance is characterized, how uncertainty is communicated, and how programs are designed so that results stand up to scrutiny and still drive action.

Trust is earned when methods are transparent, uncertainty is quantified and communicated in decision-ready terms, and results hold up over time. Just as importantly, trusted measurement has to connect to operations. Finding emissions is only the start; the value comes when measurement triggers response, verification, and learning.

MEMTS: When you hear “data, models, and trust,” what does trustworthy data mean in methane quantification?

Thorpe: It means being transparent and explicit—ideally with experimentally validated models—about what a system can and cannot tell you under real-world conditions. Detection sensitivity can vary strongly with environmental and operational factors, and testing only under favorable conditions can misrepresent field performance by large factors. Trust also requires quantification error and bias to be characterized (and corrected) as a function of conditions. Doing that can materially improve inventory accuracy. Finally, if the goal is a measurement-based inventory that can track progress, three basics must hold: sufficient sensitivity to detect the vast majority of emissions (>90%), a characterized infrastructure population with representative sampling, and robust interpretation and accounting models. Otherwise, you risk systematic error from patchwork assumptions.

MEMTS: What evidence standards should emissions technologies meet before their data are used for major operational or regulatory decisions?

Thorpe: Independent, single-blind, controlled-release testing is a strong foundation for characterizing localization, probability of detection (POD), and quantification error. But the key is breadth of testing. Performance should be characterized across practical deployment conditions, not only best-case scenarios. Once characterized, transparent disclosure helps everyone understand what was detectable (and what wasn’t) during a campaign, an essential ingredient for defensibility and for maintaining trust.

MEMTS: Yes, single-blind controlled-release testing is great. But any thoughts on the double-blind controlled-release approach, which may offer superior bias reduction and more reliable results?

Roos: Agreed, double-blind testing is even better if available from the third party. The gold standard is independent third-party controlled-release testing during actual field operations where the participant doesn’t even know they’re being tested. This prevents opportunities for participants to game the system because they can’t select the best environmental conditions, the ground surface changes from site to site, and the emitter locations are not known. But this type of testing can become impractical at scale.

MEMTS: Also, POD curves and uncertainty models can be hard for nonspecialists to use. How should managers use them?

Roos: Teams should treat POD and uncertainty characterizations as inputs to program design and, where they have the capability, to performance audits. One practical way to make detection performance transparent is to represent a range of sensitivity outcomes (e.g., a histogram of actual detection sensitivities across sites and conditions) rather than just a single number. That makes variability visible and helps set realistic expectations for what a campaign can capture.
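To make the histogram idea concrete, here is a minimal sketch (in Python, with hypothetical numbers) of reporting detection sensitivity as a distribution across survey passes rather than as a single headline figure; the sensitivity values and bin edges are illustrative assumptions, not Bridger data.

```python
# Minimal sketch (hypothetical values): summarize detection sensitivity as a
# distribution across survey passes instead of a single headline number.
import numpy as np

# Hypothetical 90%-POD sensitivity estimates (kg/hr), one per site/pass,
# as they might be modeled from wind, altitude, and surface conditions.
sensitivities_kg_hr = np.array([1.2, 0.8, 2.5, 1.0, 4.1, 0.9, 1.7, 3.3, 1.1, 2.0])

# A histogram makes the spread across sites and conditions explicit.
counts, edges = np.histogram(sensitivities_kg_hr, bins=[0, 1, 2, 3, 4, 5])
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.0f}-{hi:.0f} kg/hr: {n} passes")

# Decision-ready summary: percentiles, not just a mean.
p10, p50, p90 = np.percentile(sensitivities_kg_hr, [10, 50, 90])
print(f"Median sensitivity {p50:.1f} kg/hr; 10th-90th percentile {p10:.1f}-{p90:.1f} kg/hr")
```

Reporting the median alongside the 10th-90th percentile range tells a manager both what a typical pass can see and how much that can vary with conditions.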

MEMTS: Over the past decade, we’ve seen an explosion of new sensing technologies, platforms, and analytics for methane. In your view, what are the essential ingredients for turning all of this into a measurement system that operators, regulators, and the public can truly trust?

Roos: The essential ingredients have evolved as the community has matured. Early on, operators wanted confident detection and clear localization so ground teams could respond efficiently. More recently, many operators have moved beyond a purely find-and-fix mindset toward equipment-attributed inventories that help them understand drivers, track trends, and plan prevention. Trust comes when these outcomes are paired with comprehensive performance characterization and transparent disclosure so stakeholders can see what the data can support and what it cannot.

MEMTS: What misconceptions do you see about “what gets detected” and “how accurate the numbers are,” especially for intermittent or low-rate sources?

Thorpe: A recurring misconception is the “gold nugget theory” that finding only the very largest emitters will capture the vast majority of emissions. Limited sensitivity can make smaller emissions disappear from view, and limited spatial resolution can merge multiple small sources into an apparently large one, both of which distort conclusions about what matters most. In production settings, superemitting events can also be short-lived (minutes to hours), so “only chase the big ones” can turn into reactive whack-a-mole instead of learning causes and preventing recurrence.

MEMTS: What do you think about the balance between physics-based models and machine learning (ML)/artificial intelligence (AI) in quantification?

Roos: We generally push physical models as far as they can go, then use ML/AI where it clearly adds value, often in detection workflows, consistency, and speed. For quantification, physical understanding remains dominant in many cases. ML/AI approaches are promising, but they still need to demonstrate robust, bias-resistant performance before being relied on for high-stakes reporting.

MEMTS: So, what is the industry currently doing to maximize the potential of ML/AI approaches in methane emissions quantification and detection workflows?

Roos: ML/AI approaches are being aggressively developed by the industry, but this is harder than you might expect. It’s not like language models, which have been developed and improved over many decades. For our industry, new models must be developed and tested for each sensor technology. This includes training ML/AI algorithms on the massive libraries of “truth” data that we’ve accumulated. The models might start off by making someone 10% or 20% more efficient, with an end goal of improving efficiency by >90%.

MEMTS: Uncertainty is closely linked to trust. How can technology providers and operators communicate uncertainty honestly while keeping it actionable?

Thorpe: First, uncertainty bounds need to be experimentally validated. Controlled experiments at scale are the baseline for credibility. Second, the industry needs better translation layers that turn technical uncertainty into decision-ready guidance (for survey design, prioritization, investment choices) rather than leaving uncertainty as an abstract statistic.

MEMTS: What is the current state-of-play with uncertainty quantification and associated errors/unknowns?

Thorpe: Standardization of how to treat uncertainty and bias is an area in which our community can improve. Confidence bounds for stochastic (random) error sources like intermittency can be directly validated by simply repeating the experiment many times. Rooting out bias can be tougher, especially when it is unknown, but it can be accomplished. Comprehensive sensor characterization, together with intercomparison of results from different sensing technologies measuring the same sample set, can tell us whether our monitoring technologies and measurement protocols are producing repeatable and reliable results.

MEMTS: Scaling from plume detections to basin inventories can be powerful—and risky. What choices matter most when scaling up?

Thorpe: The core choices are about representation and extrapolation. Programs need a defined facility population, representative sampling (often stratified), and interpretation models that account for detection sensitivity and quantification error. Temporal and spatial extrapolation should be explicit, and methods like Monte Carlo resampling can help quantify variability while keeping the focus on avoiding inventory gaps and systematic errors that erode trust.
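As one illustration of the resampling idea, the sketch below bootstraps stratified site-level measurements up to a basin total with a simple detection-probability correction; the strata, site counts, emission rates, and POD values are hypothetical placeholders, not a prescribed protocol.

```python
# Illustrative sketch (not any specific protocol): scale stratified site-level
# measurements to a basin total with bootstrap (Monte Carlo) resampling.
# All numbers below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)

# Stratified design: measured emission rates (kg/hr) from sampled sites,
# the total site count in each stratum, and an assumed mean probability
# of detection for the emission sizes that stratum represents.
strata = {
    "large_facilities": {"measured": np.array([12.0, 35.0, 8.0, 60.0, 15.0]),
                         "n_sites": 40, "pod": 0.95},
    "small_wellpads":   {"measured": np.array([0.0, 1.5, 0.0, 3.0, 0.5, 0.0, 2.2, 0.0]),
                         "n_sites": 600, "pod": 0.70},
}

def basin_total_once():
    total = 0.0
    for s in strata.values():
        # Resample measured sites with replacement, apply a crude correction
        # for partial detection, then extrapolate to the full stratum population.
        sample = rng.choice(s["measured"], size=len(s["measured"]), replace=True)
        total += (sample.mean() / s["pod"]) * s["n_sites"]
    return total

totals = np.array([basin_total_once() for _ in range(5000)])
lo, mid, hi = np.percentile(totals, [2.5, 50, 97.5])
print(f"Basin total: {mid:,.0f} kg/hr (95% interval {lo:,.0f}-{hi:,.0f})")
```

The point of the exercise is the interval, not the central estimate: the width of the resampled distribution exposes how much of the basin total rests on extrapolation from a small sample.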

MEMTS: What have you learned about sampling frequency and coverage when intermittency is a concern?

Roos: Sampling needs vary, and higher-variance emissions distributions require more sampling to achieve the same uncertainty in totals or intensity. Overcoming intermittency becomes challenging for smaller operators with fewer sites when cadence is too low. Sampling more can address that, but it can also create inequities if requirements don’t scale sensibly. A practical approach is to set uniform sampling frequency expectations to capture known systematic drivers (seasonal, production, diurnal). This approach delivers low uncertainty bounds for medium and large operators and allows for larger uncertainty bounds for smaller operators, rather than forcing them to sample more.
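A back-of-envelope sketch of why cadence requirements scale with variability: the number of repeat measurements needed to hit a target relative standard error on a mean emission rate grows with the square of the coefficient of variation. The distributions below are hypothetical.

```python
# Minimal sketch: repeat measurements needed so that the standard error of the
# mean emission rate stays within a target fraction of the mean. Higher-variance
# (more intermittent) emission distributions require many more visits.
import numpy as np

def samples_needed(mean, sigma, target_rel_se=0.10):
    """Visits needed so that SE(mean)/mean <= target_rel_se (simple i.i.d. model)."""
    return int(np.ceil((sigma / (mean * target_rel_se)) ** 2))

# Same mean rate, different variability (hypothetical numbers).
print(samples_needed(mean=5.0, sigma=5.0))   # steadier fleet: ~100 visits
print(samples_needed(mean=5.0, sigma=25.0))  # highly intermittent fleet: ~2,500 visits
```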

MEMTS: Top-down and bottom-up estimates still disagree. How should teams handle reconciliation without losing momentum?

Thorpe: If random effects are captured with uncertainty bounds (repeat measurements help), persistent disagreements often point to systematic error in technology performance or sampling strategy. That’s why reducing systematic error sources like sensitivity limits, quantification bias, and unrepresentative sampling is so important. And, even when reconciliation isn’t perfect, the industry shouldn’t lose sight of the end goal: drive reductions, not just debates.

MEMTS: Operators are often wary of surprise numbers that differ from their expectations. What good practices have you seen for bringing operators into the process early so that measurement campaigns are seen as a partnership rather than an audit?

Roos: It helps to emphasize that this is a journey—unexpected events will happen—and the goal is continuous improvement. Providing context up front also matters. For example, sharing reasonable, anonymized bounds of what typical outcomes look like in a basin can set expectations, reduce surprises, and build a partnership mindset.

MEMTS: With satellites, aircraft, drones, fixed sensors, optical gas imaging, and even supervisory control and data acquisition (SCADA) signals, how should operators choose a monitoring mix?

Roos: A risk-based approach is practical: higher-risk or higher-complexity sites often warrant more frequent monitoring, while simpler sites may not. When the chance of a significant event between scans becomes too high, complementary tools (fixed sensors, satellites, SCADA) can fill temporal gaps. Intercomparisons across technologies are also valuable to show what truly works for a given asset.

MEMTS: When different methods disagree, what keeps trust intact?

Thorpe: Treat discrepancies as diagnostic signals. Missed sources and systematic limitations differ by approach. Ground methods can struggle with blind spots or short-lived events, and aerial and satellite approaches often hinge on sensitivity and temporal sampling. Trust is maintained by the willingness to dig in, find root causes, and improve program design and interpretation rather than dismissing measurements outright.

MEMTS: Looking ahead 5–10 years, what is the next frontier? And what skills should young professionals build?

Roos: The frontier is faster feedback loops through more scalable and frequent measurements, more integrated data systems, and more data fusion and predictive analytics that highlight abnormal emissions quickly and prioritize response. Methane data are fundamentally geospatial, so GIS [geographic information system] proficiency helps, and communication is critical because this space spans science, operations, technology, and policy.

MEMTS: You made a fantastic statement of fact about methane data being fundamentally geospatial. So, what is your take on the seemingly disproportionate attention to, and action on, methane detection and abatement around the world?

Roos: Technology for methane reduction was adopted earliest in North America due to a combination of tech availability and an eager industry. With those learnings, we are seeing multinational oil and gas companies spread technology solutions abroad. They are establishing emissions reduction as a “cultural norm” akin to safety that is propagating globally.

MEMTS: Basically, there are no atmospheric boundaries between nations, meaning that the impact of methane release isn’t solely local but global. Therefore, the efforts to detect, quantify, manage, and reduce methane in the West remain futile unless other parts of the world (e.g., the global south) up their game. What are your thoughts on this?

Thorpe: Both locally and globally, everyone loses by letting those methane molecules escape into the atmosphere. So, there’s a massive win/win opportunity for keeping that methane in the pipes all the way to the burner tip. That win/win extends from increased revenue to improved safety to reducing greenhouse gases across the globe. We focus on the part that we control: making emissions reduction simple. If we accomplish that, then we increase our positive impact and everyone wins.

MEMTS: Do we need near-perfect data, or “good enough” trusted data that drives reductions now?

Roos: Not perfection—confidence. If expectations are to reduce emissions by ~20% per year, then measurement uncertainty needs to be comparable to, or smaller than, that change to confidently verify progress. The encouraging part is that tools already exist to support that. The biggest impact now is deploying what works broadly and using it to drive real reductions.
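As a rough illustration of that point, the snippet below checks whether a ~20% year-over-year reduction would clear a simple two-sigma threshold given an assumed relative uncertainty on each annual inventory; the uncertainty values are hypothetical and the quadrature combination is a simplification.

```python
# Back-of-envelope sketch: can a program with a given relative uncertainty on
# each annual inventory confidently verify a ~20% reduction? Assumes the two
# annual uncertainties are independent and combine in quadrature.
import math

def change_detectable(reduction=0.20, rel_unc_per_year=0.10, k=2.0):
    """True if the claimed reduction exceeds k times the combined uncertainty."""
    combined = math.sqrt(2) * rel_unc_per_year  # same relative uncertainty both years
    return reduction > k * combined

print(change_detectable(rel_unc_per_year=0.05))  # True: 20% exceeds a ~14% threshold
print(change_detectable(rel_unc_per_year=0.10))  # False: a ~28% threshold swamps a 20% change
```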

MEMTS: As part of treating uncertainties, can you share some experience of collaborating with stakeholders (i.e., decision-makers and the public) in the communication process toward ensuring social license to operate?

Roos: To help instill trust, we’ve worked with stakeholders including operators, regulators, academics, and the public to advocate for, and present, accurate science paired with fair and balanced messaging. Sensationalism and fearmongering can be powerful marketing tactics, but they tend to create animosity between stakeholders and noise and confusion in our marketplace, and they can lead to irrational and inefficient strategic decisions. We encourage stakeholders to get on the ground, meet the people, and take the time to understand the true progress that the oil and gas industry is making to reduce emissions.

MEMTS: Finally, by design and construction, it is understood that multiphase flowmeters (MPFMs) can eliminate a large chunk of methane emissions compared with traditional separation-based facilities. So, why bother with data, models, and trust issues for methane emissions quantification?

Thorpe and Roos: Thoughtful facility design and construction, including advanced tech like MPFMs, is a powerful way to reduce emissions from normal operating processes. But, even with ideal zero-emission components, equipment will still age and components will fail. Periodic verification and tracking of emissions will always be necessary to maintain reductions, especially for traditional infrastructure and fugitive emissions across all sectors of the industry.