AI built for speech is now decoding the language of earthquakes.
A team of researchers from the Earth and Environmental Sciences Division at Los Alamos National Laboratory repurposed Meta’s Wav2Vec-2.0, an AI model designed for speech recognition, to analyze seismic signals from Hawaii’s 2018 Kīlauea volcano collapse.
Their findings, published in Nature Communications, suggest that faults emit distinct signals as they shift — patterns that AI can now track in real time. While this doesn’t mean AI can predict earthquakes, the study marks an important step toward understanding how faults behave before a slip event.
“Seismic records are acoustic measurements of waves passing through the solid Earth. From a signal processing perspective, many similar techniques are applied for both audio and seismic waveform analysis,” the researchers note.
In the past five years, earthquakes in Japan, Turkey and California have caused tens of billions of dollars in damage and displaced millions of people. Better tools for tracking how faults move could improve hazard monitoring. That’s where AI comes in.
The Los Alamos team, which included Kun Wang and Paul Johnson, tested whether speech-recognition AI could make sense of fault movements — deciphering the tremors like words in a sentence.
To test their approach, the team used data from the dramatic 2018 collapse of Hawaii’s Kīlauea caldera, which triggered a series of earthquakes over three months. The AI analyzed seismic waveforms and mapped them to real-time ground movement, revealing that faults might “speak” in patterns resembling human speech.
Speech recognition models like Wav2Vec-2.0 are well-suited for this task because they excel at identifying complex, time-series data patterns — whether involving human speech or the Earth’s tremors. The AI model outperformed traditional methods such as gradient-boosted trees, which struggle with the unpredictable, continuous nature of seismic signals.
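The study’s actual pipeline fine-tuned Wav2Vec-2.0 on Kīlauea seismograms, but the basic data setup it describes can be illustrated without the model itself. The sketch below is a minimal, hypothetical example: it slices a continuous waveform into fixed-length frames, z-score normalizes each frame the way speech models expect their audio input, and pairs each frame with the contemporaneous ground-displacement reading it should predict. The 100 Hz sampling rate, frame length, and all data values are assumptions for illustration, not figures from the paper.

```python
# Illustrative sketch only: framing a seismic waveform for a
# speech-style regression model. All parameters are assumptions.
import math

SAMPLE_RATE = 100          # samples per second (assumed rate)
FRAME_LEN = SAMPLE_RATE    # one-second frames

def frame_and_normalize(waveform):
    """Split a waveform into non-overlapping frames and z-score each frame,
    mirroring the per-utterance normalization speech models apply to audio."""
    frames = []
    for start in range(0, len(waveform) - FRAME_LEN + 1, FRAME_LEN):
        frame = waveform[start:start + FRAME_LEN]
        mean = sum(frame) / FRAME_LEN
        std = math.sqrt(sum((x - mean) ** 2 for x in frame) / FRAME_LEN) or 1.0
        frames.append([(x - mean) / std for x in frame])
    return frames

def pair_with_displacement(frames, displacement):
    """Pair each frame with the displacement measured over the SAME interval.
    This is regression on contemporaneous motion, not forecasting."""
    return list(zip(frames, displacement))

# Synthetic stand-ins for real data (the study used Kilauea seismograms
# and geodetic displacement records).
waveform = [math.sin(2 * math.pi * 5 * t / SAMPLE_RATE) for t in range(300)]
displacement = [0.0, 0.4, 1.1]  # made-up readings, one per second

dataset = pair_with_displacement(frame_and_normalize(waveform), displacement)
print(len(dataset))  # → 3 frame/label pairs
```

The key design point, reflected in the pairing step, is the distinction the article draws: each waveform frame is matched to ground motion happening at the same moment, which is where the model succeeded; shifting the labels forward in time turns the task into forecasting, where results were inconclusive.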

While the AI showed promise in tracking real-time fault shifts, it was less effective at forecasting future displacement. Attempts to train the model for near-future predictions — essentially, asking it to anticipate a slip event before it happens — yielded inconclusive results.
“We need to expand the training data to include continuous data from other seismic networks that contain more variations in naturally occurring and anthropogenic signals.”
“This research, as applied to tectonic fault systems, is still in its infancy… Extending these efforts to real-world forecasting will require further model development with physics-based constraints.”
Despite the challenges in forecasting, the results mark an intriguing advancement in earthquake research. This study suggests that AI models designed for speech recognition may be uniquely suited to interpreting the intricate, shifting signals faults generate over time.
Speech-based AI isn’t predicting earthquakes yet.
But this research suggests it could one day — if scientists can teach it to listen more carefully.