When the Earth Talks, AI Listens

by Brian Caulfield

AI built for speech is now decoding the language of earthquakes.

A team of researchers from the Earth and Environmental Sciences division at Los Alamos National Laboratory repurposed Meta’s Wav2Vec-2.0, an AI model designed for speech recognition, to analyze seismic signals from Hawaii’s 2018 Kīlauea volcano collapse.

Their findings, published in Nature Communications, suggest that faults emit distinct signals as they shift — patterns that AI can now track in real time. While this doesn’t mean AI can predict earthquakes, the study marks an important step toward understanding how faults behave before a slip event.

“Seismic records are acoustic measurements of waves passing through the solid Earth,” said Christopher Johnson, one of the study’s lead researchers. “From a signal processing perspective, many similar techniques are applied for both audio and seismic waveform analysis.”

(Image: The lava lake in Halemaʻumaʻu during the 2020-2021 eruption. Credit: USGS/F. Trusdell)

Big earthquakes don’t just shake the ground — they upend economies. In the past five years, quakes in Japan, Turkey and California have caused tens of billions of dollars in damage and displaced millions of people.

That’s where AI comes in. The Los Alamos team, led by Johnson along with Kun Wang and Paul Johnson, tested whether speech-recognition AI could make sense of fault movements, deciphering tremors like words in a sentence.

To test their approach, the team used data from the dramatic 2018 collapse of Hawaii’s Kīlauea caldera, which triggered a series of earthquakes over three months.

The AI analyzed seismic waveforms and mapped them to real-time ground movement, revealing that faults might “speak” in patterns resembling human speech.

Speech recognition models like Wav2Vec-2.0 are well suited to this task because they excel at finding patterns in complex time-series data, whether the signal is human speech or the Earth’s tremors.
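
As a concrete illustration, here’s a minimal sketch of extracting learned representations from the open-source Wav2Vec 2.0 checkpoint with the Hugging Face Transformers library. The checkpoint name and the idea of feeding in a trace resampled to the model’s 16 kHz audio rate are assumptions for illustration, not preprocessing details from the paper.

```python
# A minimal sketch: run a 1-D signal through a pretrained Wav2Vec 2.0 encoder.
# The random trace stands in for a seismic waveform resampled to 16 kHz.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
model.eval()

trace = torch.randn(16_000 * 5).numpy()  # 5 seconds of synthetic signal

inputs = extractor(trace, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, time_steps, 768)
print(hidden.shape)
```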

The AI model outperformed traditional methods such as gradient-boosted trees, which struggle with the unpredictable nature of seismic signals. Gradient-boosted trees build decision trees in sequence, refining predictions by correcting the previous trees’ errors at each step.

That approach works well on fixed, hand-engineered features but falters on highly variable, continuous signals like seismic waveforms. Deep learning models like Wav2Vec-2.0, in contrast, learn the underlying patterns directly from the raw signal.
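
For contrast, here’s a hedged sketch of the kind of gradient-boosted baseline described above, using scikit-learn’s HistGradientBoostingRegressor on summary statistics computed per waveform window. The synthetic data, window length and feature choices are illustrative assumptions, not details from the study.

```python
# A sketch of a gradient-boosted baseline: hand-built statistics per window.
# All data here is synthetic; features and window size are assumptions.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
waveform = rng.standard_normal(100_000)              # stand-in seismic trace
displacement = np.cumsum(rng.standard_normal(100))   # stand-in ground motion

# Slice the trace into one window per displacement measurement.
win = len(waveform) // len(displacement)
windows = waveform[: win * len(displacement)].reshape(len(displacement), win)

# Fixed summary statistics per window: the features the trees depend on.
features = np.column_stack([
    windows.mean(axis=1),
    windows.std(axis=1),
    np.abs(windows).max(axis=1),
    (np.diff(np.sign(windows), axis=1) != 0).mean(axis=1),  # zero-crossing rate
])

model = HistGradientBoostingRegressor().fit(features[:80], displacement[:80])
print(model.predict(features[80:])[:5])
```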

How AI Was Trained to Listen to the Earth

Unlike earlier machine learning approaches that required manually labeled training data, the researchers trained Wav2Vec-2.0 with self-supervised learning. The model was pretrained on continuous seismic waveforms and then fine-tuned using real-world data from Kīlauea’s collapse sequence.
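
The article doesn’t spell out the fine-tuning architecture, but a common pattern for this kind of setup looks like the sketch below: a pretrained Wav2Vec 2.0 encoder topped with a small regression head that maps each waveform window to a displacement value. The pooling, head and loss here are assumptions for illustration, not the authors’ exact design.

```python
# A sketch (not the authors' exact architecture): fine-tune a pretrained
# Wav2Vec 2.0 encoder to regress displacement from a waveform window.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class DisplacementRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_values):
        hidden = self.encoder(input_values).last_hidden_state  # (B, T, H)
        pooled = hidden.mean(dim=1)  # average over time steps (an assumption)
        return self.head(pooled).squeeze(-1)

model = DisplacementRegressor()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

batch = torch.randn(2, 16_000)  # two 1-second stand-in waveform windows
target = torch.randn(2)         # stand-in displacement labels

loss = nn.functional.mse_loss(model(batch), target)
loss.backward()
optimizer.step()
print(float(loss))
```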

NVIDIA accelerated computing played a crucial role: high-performance NVIDIA GPUs processed vast amounts of seismic waveform data in parallel, speeding up training and enabling the AI to efficiently extract meaningful patterns from continuous seismic signals.
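
The article doesn’t give the team’s training configuration, so the following is only the generic PyTorch pattern for mixed-precision training on an NVIDIA GPU, with a toy model standing in for the real one:

```python
# Generic mixed-precision training step on a GPU; the model and data are toys.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
net = nn.Linear(768, 1).to(device)  # stand-in for the full seismic model
opt = torch.optim.AdamW(net.parameters(), lr=1e-5)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(32, 768, device=device)  # a batch of pooled features
y = torch.randn(32, device=device)

with torch.autocast(device_type=device, enabled=(device == "cuda")):
    loss = nn.functional.mse_loss(net(x).squeeze(-1), y)

scaler.scale(loss).backward()  # scale to keep fp16 gradients from underflowing
scaler.step(opt)
scaler.update()
```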

What’s Still Missing: Can AI Predict Earthquakes?

While the AI showed promise in tracking real-time fault shifts, it was less effective at forecasting future displacement. Attempts to train the model for near-future predictions — essentially, asking it to anticipate a slip event before it happens — yielded inconclusive results.
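
The distinction comes down to how the labels are aligned. In a contemporaneous setup, each waveform window is paired with the displacement measured at the same moment; in a forecasting setup, the label is shifted into the future. The sketch below uses synthetic arrays and an arbitrary ten-step horizon purely to illustrate that shift:

```python
# Contemporaneous tracking vs. near-future forecasting: only the label
# alignment changes. Arrays and horizon are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 768))  # per-window model embeddings
displacement = rng.standard_normal(1000)     # displacement on the same clock

horizon = 10  # arbitrary: how far ahead a forecasting label looks

# Contemporaneous: window i is labeled with displacement at time i.
X_now, y_now = features, displacement

# Forecasting: window i is labeled with displacement at time i + horizon.
X_future, y_future = features[:-horizon], displacement[horizon:]
print(X_now.shape, X_future.shape)
```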

“We need to expand the training data to include continuous data from other seismic networks that contain more variations in naturally occurring and anthropogenic signals,” Johnson explained.

A Step Toward Smarter Seismic Monitoring

Despite the challenges in forecasting, the results mark an intriguing advancement in earthquake research. This study suggests that AI models designed for speech recognition may be uniquely suited to interpreting the intricate, shifting signals faults generate over time.

“This research, as applied to tectonic fault systems, is still in its infancy,” said Johnson. “The study is more analogous to data from laboratory experiments than large earthquake fault zones, which have much longer recurrence intervals. Extending these efforts to real-world forecasting will require further model development with physics-based constraints.”

So, no, speech-based AI models aren’t predicting earthquakes yet. But this research suggests they could one day, if scientists can teach them to listen more carefully.

Read the full paper, “Automatic Speech Recognition Predicts Contemporaneous Earthquake Fault Displacement,” to dive deeper into the science behind this groundbreaking research.