Total and Google Cloud have signed an agreement to jointly develop artificial intelligence (AI) solutions applied to subsurface data analysis for oil and gas exploration and production. The agreement focuses on the development of AI programs that will make it possible to interpret subsurface images, notably from seismic studies (using computer vision), and to automate the analysis of technical documents (using natural language processing). These programs will allow Total’s geologists, geophysicists, reservoir and geo-information engineers to explore and assess oil and gas fields faster and more effectively. Under this partnership, Total geoscientists will work side-by-side with Google Cloud’s machine learning experts within the same project team, based in Google Cloud’s Advanced Solutions Lab in California.
Total started applying artificial intelligence to characterize oil and gas fields using machine learning algorithms in the 1990s. In 2013, the company used machine learning algorithms to implement predictive maintenance for turbines, pumps and compressors at its industrial facilities, generating savings of several hundred million dollars. Today, Total teams are exploring multiple machine learning and deep learning applications such as production profile forecasting, automated analysis of satellite images and analysis of rock sample images.
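The article does not describe Total's predictive-maintenance algorithms, but the general idea behind such systems can be sketched simply: flag a sensor reading (say, pump vibration) that deviates sharply from its recent history. The function and data below are purely illustrative, not Total's implementation.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate strongly from the trailing window.

    A reading is anomalous when it lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings.
    """
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady vibration levels with one sharp spike at index 8
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 5.0, 1.0]
print(flag_anomalies(vibration))  # → [8]
```

In production settings this simple z-score rule would typically be replaced by models trained on historical failure data, but the principle of comparing live telemetry against an expected baseline is the same.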
Industry-Wide Computing Power
In January this year, Eni launched its new HPC4 supercomputer, which quadruples the company’s computing power and which, the company says, makes it the world’s most powerful industrial system. According to the latest official Top 500 supercomputers list, published last November (the next list is due in June 2018), Eni’s HPC4 is the only non-governmental and non-institutional system ranking among the top 10 most powerful systems in the world.
Eni’s supercomputers (the HPC3 and the new HPC4) provide strategic support to the company’s process of digital transformation across the entire value chain, from the exploration and development phase of oil and gas reservoirs, to the management of the big data generated in the operational phase by all productive assets (upstream, refining and petrochemicals). In particular, HPC4 now supports the execution and evolution of Eni’s suite of 3D seismic imaging packages, as well as advanced petroleum system modeling and reservoir simulation algorithms and optimization of production plants.
HPC4 has a peak performance of 18.6 petaflops which, combined with the supercomputing system already in operation (HPC3), increases Eni’s peak computational capacity to 22.4 petaflops. The new hybrid HPC cluster, provided by Hewlett Packard Enterprise, is built on 1,600 HPE ProLiant DL380 nodes, each equipped with two Intel 24-core Skylake processors (totaling more than 76,000 cores) and two NVIDIA Tesla P100 GPU accelerators, all connected through a high-speed EDR InfiniBand network. The new system will work alongside a high-performance 15-petabyte storage subsystem.
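The figures quoted above are internally consistent, as a quick check shows (all numbers taken from the article itself):

```python
# Sanity-check the HPC4 figures quoted in the article.
nodes = 1600
cpus_per_node = 2
cores_per_cpu = 24

total_cores = nodes * cpus_per_node * cores_per_cpu
print(total_cores)  # → 76800, matching "more than 76,000 cores"

hpc4_peak_pflops = 18.6
combined_peak_pflops = 22.4
hpc3_peak_pflops = combined_peak_pflops - hpc4_peak_pflops
print(round(hpc3_peak_pflops, 1))  # → 3.8, the implied peak of the older HPC3
```

Note that these are CPU core counts only; the peak-petaflops figures also include the contribution of the two Tesla P100 GPUs per node.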
Late last year, BP more than doubled the total computing power of its Center for High-Performance Computing in Houston, making it one of the most powerful supercomputers in the world for commercial research. Increased computing power, speed and storage reduce the time needed to analyze large amounts of seismic data to support exploration, appraisal and development plans as well as other research and technology developments throughout BP.