When the Earth Talks, AI Listens

Scientists repurpose speech recognition AI to decode seismic activity, uncovering patterns that could one day help predict earthquakes.
by Brian Caulfield

AI built for speech is now decoding the language of earthquakes.

A team of researchers from the Earth and Environmental Sciences division at Los Alamos National Laboratory repurposed Meta’s Wav2Vec-2.0, an AI model designed for speech recognition, to analyze seismic signals from Hawaii’s 2018 Kīlauea volcano collapse.

Their findings, published in Nature Communications, suggest that faults emit distinct signals as they shift — patterns that AI can now track in real time. While this doesn’t mean AI can predict earthquakes, the study marks an important step toward understanding how faults behave before a slip event.

“Seismic records are acoustic measurements of waves passing through the solid Earth. From a signal processing perspective, many similar techniques are applied for both audio and seismic waveform analysis.”

— Christopher Johnson, Los Alamos National Laboratory

The stakes

In the past five years, earthquakes in Japan, Turkey and California have caused tens of billions of dollars in damage and displaced millions of people. That’s where AI comes in.

Led by Johnson, along with Kun Wang and Paul Johnson, the Los Alamos team tested whether speech-recognition AI could make sense of fault movements — deciphering the tremors like words in a sentence.

To test their approach, the team used data from the dramatic 2018 collapse of Hawaii’s Kīlauea caldera, which triggered a series of earthquakes over three months. The AI analyzed seismic waveforms and mapped them to real-time ground movement, revealing that faults might “speak” in patterns resembling human speech.

Speech recognition models like Wav2Vec-2.0 are well suited to this task because they excel at identifying complex patterns in time-series data, whether the source is human speech or the Earth’s tremors. In the study, the AI model outperformed traditional machine-learning methods such as gradient-boosted trees, which struggle with the unpredictable, continuous nature of seismic signals.
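
The parallel between audio and seismic analysis starts with how both are framed: a continuous waveform is cut into short, overlapping windows, just as speech models frame audio before feature extraction. The sketch below is purely illustrative — the function names, window sizes, and toy features are assumptions for demonstration, not details from the paper or the Wav2Vec-2.0 codebase.

```python
# Hypothetical sketch: framing a continuous seismic waveform the way a
# speech model frames audio. The waveform is split into fixed-length,
# overlapping windows, and each window is reduced to simple features.
# All names and parameters here are illustrative, not from the study.

def window_waveform(samples, win_len=400, hop=160):
    """Split a 1-D waveform into overlapping windows (speech-style framing)."""
    windows = []
    start = 0
    while start + win_len <= len(samples):
        windows.append(samples[start:start + win_len])
        start += hop
    return windows

def toy_features(window):
    """Crude per-window features: mean absolute amplitude and mean energy."""
    n = len(window)
    mean_abs = sum(abs(x) for x in window) / n
    energy = sum(x * x for x in window) / n
    return (mean_abs, energy)

# Example: a synthetic "tremor" with slowly growing amplitude.
waveform = [0.001 * i * ((-1) ** i) for i in range(2000)]
feats = [toy_features(w) for w in window_waveform(waveform)]
print(len(feats))  # 11 windows
```

A real pipeline would feed windows like these into a learned encoder rather than hand-crafted features — that learned representation is what makes the speech-model approach powerful.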

Lava lake in Halemaʻumaʻu crater at Kīlauea volcano
The AI model was tested using data from the 2018 collapse of Hawaii’s Kīlauea caldera, which triggered months of earthquakes and reshaped the volcanic landscape. The lava lake in Halemaʻumaʻu during the 2020–2021 eruption is a striking reminder of Kīlauea’s ongoing activity. USGS / F. Trusdell

How AI Was Trained to Listen to the Earth

The approach
01
Self-supervised learning
No manually labeled training data needed — the model learned directly from continuous seismic waveforms.
02
NVIDIA GPU acceleration
High-performance NVIDIA GPUs processed vast amounts of waveform data in parallel, accelerating training.
03
Real-world fine-tuning
The model was fine-tuned on Kīlauea’s 2018 collapse sequence — three months of continuous earthquake data.
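
The self-supervised step above is the key trick: Wav2Vec-2.0 learns by masking spans of its own latent sequence and predicting what was hidden, so the waveform itself supplies the training signal and no human labels are required. The toy code below shows only the span-masking idea, with made-up parameter values; it is a schematic of the concept, not the model’s actual implementation.

```python
import random

# Hypothetical sketch of the self-supervised idea behind Wav2Vec-2.0:
# mask contiguous spans of a latent sequence, then train a model to
# recover the masked content. No labels are needed -- the data itself
# is the target. This toy version shows only the masking step.

def mask_spans(seq_len, span=10, mask_prob=0.065, seed=0):
    """Pick masked positions by sampling span start points."""
    rng = random.Random(seed)
    masked = set()
    for start in range(seq_len):
        if rng.random() < mask_prob:
            for i in range(start, min(start + span, seq_len)):
                masked.add(i)
    return sorted(masked)

masked = mask_spans(seq_len=200)
print(len(masked) > 0)  # some positions are masked
```

Because the objective needs nothing but raw waveforms, the same recipe transfers naturally from years of unlabeled audio to years of unlabeled seismograms.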

What’s Still Missing

While the AI showed promise in tracking real-time fault shifts, it was less effective at forecasting future displacement. Attempts to train the model for near-future predictions — essentially, asking it to anticipate a slip event before it happens — yielded inconclusive results.
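
The gap between the two tasks can be stated simply: tracking (or "nowcasting") pairs the waveform at time t with displacement at the same time t, while forecasting pairs it with displacement at a future time t + horizon. The snippet below is a schematic of that target alignment only — the function and values are illustrative assumptions, not the study’s setup.

```python
# Hypothetical illustration of the two problem framings:
# nowcasting maps the input at step t to the target at step t,
# while forecasting maps it to the target at step t + horizon.
# Each pair is (input step, target step); purely schematic.

def make_pairs(n_steps, horizon=0):
    """Pair each input step with its target step; horizon=0 is nowcasting."""
    return [(t, t + horizon) for t in range(n_steps - horizon)]

now_pairs = make_pairs(10, horizon=0)   # (0, 0), (1, 1), ...
fore_pairs = make_pairs(10, horizon=3)  # (0, 3), (1, 4), ...
print(len(now_pairs), len(fore_pairs))  # 10 7
```

The data and model can be identical in both framings; only the target shifts. That small change is what turned the Los Alamos team’s strong tracking results into inconclusive forecasting ones.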

In the researchers’ own words

“We need to expand the training data to include continuous data from other seismic networks that contain more variations in naturally occurring and anthropogenic signals.”

“This research, as applied to tectonic fault systems, is still in its infancy… Extending these efforts to real-world forecasting will require further model development with physics-based constraints.”

— Christopher Johnson, Los Alamos National Laboratory

Despite the challenges in forecasting, the results mark an intriguing advancement in earthquake research. This study suggests that AI models designed for speech recognition may be uniquely suited to interpreting the intricate, shifting signals faults generate over time.

The bottom line

Speech-based AI isn’t predicting earthquakes yet.
But this research suggests it could one day — if scientists can teach it to listen more carefully.

Read the Full Paper
