How Deep Learning Is Dispelling the Clouds Hanging Over Climate Models

by Isha Salian

Global climate projections don’t always agree on how much the climate will warm in coming decades. Some project temperature rises of three or more degrees Celsius by the year 2100. Others predict a 1.5-degree shift.

The primary reason for this variation may be surprising: clouds.

“Most of the uncertainties we have — not all, but most — really tie back to how we represent clouds in those models,” says Pierre Gentine, an associate professor in Columbia University’s earth and environmental engineering department.

To resolve this haziness in climate projections, Gentine and his fellow researchers came up with a deep learning model.

How Do You Catch a Cloud and Pin It Down?

The Paris Agreement, drafted by climate scientists and world leaders in 2015, set a two-degree limit for global temperature rise this century compared to pre-industrial levels. Cross that tipping point, and expect to see more droughts, sea level rise and extinction risks for many animal species.

The two-degree figure so often mentioned is a global average, including temperature rise over the oceans, where humans don’t live, Gentine points out. It’s also an average across many different climate models.

That’s because models have to approximate some features of the environment. Each modeling group makes those approximations a bit differently, leading to variations among projections.

Climate models look like the globe with a grid laid over it. Each cell on the grid spans roughly 50 to 100 kilometers on a side. This resolution isn’t fine enough to account for smaller features like clouds, which can measure just a few hundred meters across, as noted in a paper Gentine co-authored.

Scientists rely on a mathematical approximation, or parameterization, of what average cloud cover will look like in a region and what clouds will do to moisten or heat the atmosphere.
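For readers curious what such a parameterization looks like in practice, here’s a minimal Python sketch of a toy diagnostic scheme that maps a grid cell’s mean relative humidity to an estimated cloud fraction. The functional form and the critical-humidity threshold are illustrative assumptions, not the equations of any production climate model.

```python
import numpy as np

def cloud_fraction_param(rel_humidity, rh_crit=0.8):
    """Toy diagnostic cloud-fraction parameterization.

    Maps a grid cell's mean relative humidity (0..1) to an estimated
    cloud fraction (0..1). The quadratic ramp above a critical humidity
    threshold is a common textbook form, but the exact shape and the
    rh_crit value here are illustrative, not from any real model.
    """
    excess = np.clip((rel_humidity - rh_crit) / (1.0 - rh_crit), 0.0, 1.0)
    return 1.0 - (1.0 - excess) ** 2

# Example: a cell at 90% mean relative humidity
print(cloud_fraction_param(0.9))  # ~0.75 cloud cover in this toy scheme
```

Every modeling group tunes functions like this differently, which is exactly where the spread among projections creeps in.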

Depending on how researchers represent the physics of clouds, they could end up very bright in the model, or else greyer and darker. This makes a big difference: brighter clouds reflect more sunlight back into space and contribute less to warming.

A model that projects three degrees of global temperature rise may assume less cloud cover, or darker clouds, than a model estimating one degree of warming.

More accurate cloud-resolving models use a much finer resolution, but they’re bulky. They require too much computational power for researchers to use them in global climate models that look more than a couple of years into the future.

The Silver Lining: Deep Learning

To solve this problem, the researchers picked a cloud-resolving model with a high resolution of roughly one square mile, capturing data on all kinds of clouds.

They then trained a neural network on six months of this granular data, and plugged the deep learning solution into an existing climate model — replacing its previous approximation of cloud effects.
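As a rough sketch of what such a training setup can look like, the TensorFlow/Keras snippet below trains a small fully connected network to map a grid column’s state to the subgrid heating and moistening tendencies a cloud-resolving model would produce. The layer sizes, input layout and the random stand-in data are assumptions for illustration, not the researchers’ actual architecture or dataset.

```python
import numpy as np
import tensorflow as tf

# Stand-in for coarse-grained cloud-resolving model output:
# inputs  = column state (e.g., temperature + humidity on 30 levels)
# targets = subgrid heating/moistening tendencies on the same levels
n_samples, n_levels = 10_000, 30
rng = np.random.default_rng(0)
X = rng.normal(size=(n_samples, 2 * n_levels)).astype("float32")
y = rng.normal(size=(n_samples, 2 * n_levels)).astype("float32")

# Small fully connected emulator of the cloud parameterization
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2 * n_levels,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(2 * n_levels),  # predicted tendencies
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, batch_size=1024, epochs=5, validation_split=0.1)

# Inside the host climate model, a call like this would replace the
# conventional cloud parameterization at each grid column and time step
tendencies = model.predict(X[:1])
```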

Shifting to a neural network for representing clouds improved the climate model’s performance, Gentine says. In addition to temperature estimates, it provided better predictions for precipitation extremes, which most current models represent poorly.

An added benefit: the neural network is eight times cheaper to run than the original climate model.

The researchers use an assortment of NVIDIA GPUs, including Tesla P100 and K80 accelerators. While they worked with the TensorFlow software library at first, they later had to convert the code to Fortran, the language in which most large-scale climate models are written.
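One common way to bridge that gap, sketched below under the assumption of a simple dense network like the one above, is to export the trained weights as plain arrays that a hand-written Fortran forward pass can read at run time. The file layout here is hypothetical, not necessarily the team’s actual pipeline.

```python
import numpy as np
import tensorflow as tf

# A tiny stand-in network (same shape of problem as the sketch above)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(60,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(60),
])

# Dump each layer's kernel and bias as plain-text arrays. A small
# Fortran routine inside the climate model can read these files and
# evaluate the network as a chain of matrix multiplies and ReLUs,
# avoiding any TensorFlow dependency at run time.
for i, layer in enumerate(model.layers):
    kernel, bias = layer.get_weights()
    np.savetxt(f"layer{i}_kernel.txt", kernel)  # shape: (n_in, n_out)
    np.savetxt(f"layer{i}_bias.txt", bias)      # shape: (n_out,)
```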

Gentine says the goal is to improve climate models’ ability to predict regional climate impacts accurately across the globe. These detailed views are essential for mitigation efforts and infrastructure planning.

“When we say a global temperature increase of two degrees, that doesn’t mean in Texas you won’t get four,” he says. “We critically need such accurate local estimates.”