Getting a grip on data

Most workers don’t make anything real anymore – they make data. Daniel Brown, of Common Data Access, ponders whether the oil and gas industry can figure out how to turn data into profit.


Data is the catalyst of the modern world. For the newest companies – Uber, Airbnb – it is all they have. They don’t own plant and machinery; they just take one person’s data, transform it, and sell the result to you at a profit. For the past 10 years, other industries have been trying to learn from the same playbook: how to maximize the speed with which data moves through their organizations, and how to design every step of the work to avoid waste and extract the maximum value for themselves and their shareholders.

These businesses also analyze their data as a whole, rather than piece by piece, and in doing so are learning how their companies actually work and what their customers actually buy (rather than what they say they want), and are dealing far more effectively with statistical risks such as fraud and credit control. Ten years ago, all this was new. Today, for a business of any size, it is part of daily life. As Ford’s ex-CEO Alan Mulally often said: “Facts and data set you free.”

Until the oil price crash, none of this mattered much in our industry. Margins were healthy, and technology investments were all about science and engineering solutions for new, harder-to-develop prospects, rather than business process investments in operational efficiency. While the world was riding the analytics wave, oil and gas was doing well enough not to need to pay attention.

Today, every dollar counts. Every marginal dollar saved in lifting costs is critical in staving off cessation of production. Advances in oilfield technology deliver ever more economical solutions to the engineering challenges posed by aging infrastructure. But it is the transformed business practices you get by being smart about data that offer savings in every other part of the exploration and production value chain – from prospect identification all the way through to keeping a lid on decommissioning costs.

Good data management and good business process are two sides of the same coin. Do data well, and workers have the information they need at their fingertips for their piece of the work – and pass that data on to the next person in the line, so they have the same. Good data organizations take care of their “sources of truth,” and don’t waste time finding things they should already have, or shuffling things around in spreadsheets because what they’re given isn’t quite right.

Data preparation tools like Alteryx and KNIME, together with data visualization tools like Qlik, D3, and Tableau, are eradicating spreadsheets from the workplace for essentially every routine task, eliminating huge amounts of wasteful effort and delivering a more reliable, consistent result with far shorter cycle times.
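To make the principle concrete in code (rather than in the visual tools named above), here is a minimal pandas sketch of a routine clean-and-join written once as a repeatable script instead of being shuffled by hand in a spreadsheet. The well and production figures are invented for illustration.

```python
import pandas as pd

# Hypothetical stand-ins for two exported spreadsheets
wells = pd.DataFrame({"well_id": [1, 2], "field": ["Forties", "Brent"]})
production = pd.DataFrame({
    "well_id": [1, 1, 2],
    "month": ["2017-01", "2017-02", "2017-01"],
    "volume": [120.0, 110.0, 95.0],
})

# The routine clean-and-join, written once and rerun on demand
monthly = (
    production
    .merge(wells, on="well_id")
    .groupby(["field", "month"], as_index=False)["volume"]
    .sum()
)
print(monthly)
```

Rerun next month with fresh inputs, a script like this produces the same shape of output every time; that consistency is where the shorter cycle times come from.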

Data engineering tools are also making headway. For 40 years, the reporting of oilfield activities to government took place on paper – so archives of those reports tend to be scanned images of the paper, rather than crisp, modern digital documents ready for analysis. Thanks to advances in image and character recognition tools, these scanned images can now be routinely read and analyzed, unlocking decades of exploration and development knowledge for modern-day explorers to benefit from (OE: February 2017).
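As a minimal sketch of the kind of pipeline involved, open-source tools such as the Tesseract OCR engine (used here via the pytesseract library) can read text straight off a page image. The page content and keywords below are invented, and real archive work needs image clean-up and quality control far beyond this.

```python
from PIL import Image, ImageDraw
import pytesseract  # requires the Tesseract OCR engine to be installed

# Stand-in for a scanned report page: an image with rendered text
page = Image.new("RGB", (420, 60), "white")
ImageDraw.Draw(page).text((10, 20), "Good oil show at 2,345 ft", fill="black")

# Read the text back off the image, then flag terms of interest
text = pytesseract.image_to_string(page)
keywords = ["show", "oil", "gas", "core"]
hits = [kw for kw in keywords if kw in text.lower()]
print(f"Recognized: {text.strip()!r}; keywords found: {hits}")
```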

The next data wave?

So, will oil and gas ride the next wave in the data science revolution? Machine learning has now reached the peak of Gartner’s “hype cycle” for new technologies. Can it bring the same benefits to oil and gas that it is bringing to other industries?

Machine learning at its core is a collection of pattern matching algorithms that are simply very, very fast. The technology has been around for decades, but it is only with recent changes in computer architecture – particularly the use of powerful graphics chips for the calculations involved – that it has become affordable for everyone to use.

Machine learning systems can be trained to spot patterns in any kind of data, from images and movies to real-time data from plant and machinery. Once trained, they can work at a speed, scale, and reliability that humans will never match.
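A toy sketch using scikit-learn on synthetic data illustrates that “train once, then classify at scale” pattern; any real application would use domain data (log curves, sensor streams) and far more care.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic two-class data standing in for any labeled domain dataset
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # training: the slow part, done once

# Scoring new data is fast, repeatable, and tireless
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```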

Hand-crafted learning systems have been in use in oil and gas for years to help humans avoid particularly expensive or dangerous mistakes. BP’s Well Advisor system, co-developed with Kongsberg (OE: September 2016), is credited with essentially eliminating cases of stuck pipe; and critical rotating equipment, such as gas turbines, is routinely monitored by manufacturers to optimize maintenance schedules and avoid outages.
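Those condition-monitoring systems are proprietary, but the idea can be sketched with open-source tools: an unsupervised model learns what “normal” sensor readings look like and flags departures for an engineer to review. The vibration data here is simulated.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50.0, scale=2.0, size=(1000, 1))  # baseline vibration
faults = rng.normal(loc=70.0, scale=5.0, size=(10, 1))    # simulated fault spikes
readings = np.vstack([normal, faults])

# Learn what "normal" looks like, then flag departures for review
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(readings)  # -1 marks suspected anomalies
print(f"Flagged {np.sum(flags == -1)} of {len(readings)} readings for review")
```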

More general learning systems are now starting to make a difference in other areas. As part of Common Data Access’ (CDA) 2016 Data Challenge, Schlumberger presented a geological model building workflow that, starting from raw log curves and cuttings information, compressed nine weeks of petrophysical labor into nine hours of computer time. That’s quite a saving.
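Schlumberger’s actual workflow is far richer than anything shown here, but the underlying idea, learning a petrophysical property from raw log curves and then applying the model in hours rather than weeks, can be gestured at in a few lines. The curve names and figures below are hypothetical stand-ins, not real well data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical training set: three log curves (say gamma ray, density,
# neutron) against porosity values interpreted by a petrophysicist
logs = rng.uniform(size=(500, 3))
porosity = 0.30 - 0.20 * logs[:, 1] + rng.normal(scale=0.01, size=500)

model = GradientBoostingRegressor(random_state=0).fit(logs, porosity)

# Apply the trained model to curves from an unseen well
new_logs = rng.uniform(size=(100, 3))
predicted = model.predict(new_logs)
print(f"Predicted porosity range: {predicted.min():.2f}-{predicted.max():.2f}")
```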

In data we trust?

While current machine learning techniques are good at the jobs they’re trained to do, they are not yet the perfect digital assistant for today’s subsurface professional. They are only as good as the data they’re trained on (and training can take months to complete); and once trained, they still can’t explain why they make the decisions they make. The computer can’t yet explain how it identifies a cat in a photograph. It’s not the ears, the tail, and the fur that make the cat – just some fast mathematics that outputs a probability, with no insight into the “why.”

For learning systems that we can truly trust, an explanation is needed. If a computer is to pick prospects from an automated analysis of seismic trace data, it would be helpful to know why one prospect is more worthy of investigation than another. What kind of trap is it? Are there geological indicators to suggest a seal rock may be present?

Thankfully, this next wave of machine learning is of huge interest to the rest of industry as well. From shopping habits to autonomous vehicles, enormous investments are being made to address these and other challenges, and oil and gas can benefit from the results – if we work together to get after them.

CDA is industry’s custodian of the UK’s offshore subsurface data. We have the data part of data science covered. But we need help to cover the rest. Data science is a team sport – and it is only when data, business, and technology are well matched that we can make real progress.

Daniel Brown has been working in and around data and information for over 20 years – in the physical sciences, for one of the original internet search engines, and for BP, leading their global data services organization. Today, Daniel looks after the project portfolio of CDA, the world’s largest repository of UK subsurface well and seismic data, and is leading their work to build industry capability in data analytics.
