Linguists estimate that at least half of the world's roughly 7,000 spoken languages will become extinct by the end of the century, due to forces ranging from globalization to cultural assimilation.
Part of the challenge of documenting and revitalizing endangered languages is a lack of texts and speech recordings to work with. Seneca, a language of one of the six Iroquois Nations in North America, has only about 100 first-language speakers and several hundred more second-language learners.
Automatic speech recognition (ASR) technology is widely used to transcribe languages with millions or billions of speakers, like English and Mandarin. But it has only scratched the surface with languages like Seneca, which have vastly fewer speakers and significantly less data to work with.
Now a team of researchers at the Rochester Institute of Technology in New York, along with colleagues from the University at Buffalo, is tapping deep learning to bolster the capabilities of ASR. And while its focus is on Seneca, the project's vision encompasses preserving languages around the world, and with them an important part of our shared cultural history.
“Knowing about different languages teaches us a lot about how our brain works,” said Emily Prud’hommeaux, an assistant professor of computer science at Boston College and a research faculty member at RIT. “When you document a language, you’re preserving information not only about that language but also about how humans use language in general.”
It’s no coincidence that Prud’hommeaux and her team started with the Seneca language. Three members of the Seneca nation are part of the effort – a direct connection that’s rare in research of this type, she said.
Leading the charge is Robbie Jimerson, a Ph.D. student in RIT's Golisano College of Computing and Information Sciences. He's a member of the Seneca Nation of Indians and is passionate about ensuring the survival of the Seneca language.
“There’s a big effort by the leaders of the tribe to preserve and promote our language,” said Jimerson. “I was looking for an opportunity to contribute.”
Using GANs to Create More Language Samples
Now in its third year, the project has faced challenges in accumulating language data. Jimerson said the Seneca community can be guarded about what it shares with other people, so there wasn't an abundance of recordings of the spoken language. He set out to change that.
He started by recording friends and elders who speak the language and asking them to record their friends. He found out whenever someone was speaking Seneca in public. He asked for family recordings of elders telling stories handed down from previous generations. And he grabbed any publicly available videos or recordings he could find online.
The team has fine-tuned an ASR model for Seneca, running its limited set of recordings through generative adversarial networks to synthesize additional samples. The model turns WAV files of the spoken language into streams of characters, computing the probability of each character and correcting the output as it goes.
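A GAN used for augmentation like this pits a generator, which tries to produce realistic samples, against a discriminator, which tries to tell real recordings from generated ones. Below is a minimal, self-contained sketch of that adversarial loop on toy 1-D "acoustic feature" values; the distributions, learning rates, and model sizes are illustrative assumptions, not the project's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for real acoustic features: 1-D values drawn from N(3, 1).
def real_batch(n):
    return rng.normal(3.0, 1.0, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: x_fake = mu + sigma * z, with noise z ~ N(0, 1).
mu, sigma = 0.0, 1.0
# Discriminator: D(x) = sigmoid(w * x + b), probability that x is real.
w, b = 0.0, 0.0

lr_d, lr_g, batch = 0.05, 0.05, 128
for step in range(3000):
    x_real = real_batch(batch)
    z = rng.normal(size=batch)
    x_fake = mu + sigma * z

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr_d * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on the non-saturating objective log D(fake).
    d_fake = sigmoid(w * x_fake + b)
    grad_x = (1 - d_fake) * w          # d log D(x_fake) / d x_fake
    mu += lr_g * np.mean(grad_x)
    sigma += lr_g * np.mean(grad_x * z)

print(f"generated mean: {mu:.2f} (real mean is 3.0)")
```

After training, the generator's output distribution drifts toward the real one, so sampling from it yields extra training data that resembles the scarce originals.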
The resulting data is then fed into a deep learning model, which in turn improves the ASR model's accuracy.
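The character-stream step can be sketched as follows. The article doesn't name the decoding scheme, so this assumes CTC-style greedy decoding, a common choice for character-level ASR: the acoustic model emits a character probability distribution per audio frame, and decoding collapses repeats and drops the "blank" symbol. The alphabet here is hypothetical.

```python
import numpy as np

# Hypothetical alphabet; index 0 is the CTC "blank" symbol.
ALPHABET = ["-", "a", "e", "n", "o", "s", "c", " "]

def greedy_ctc_decode(frame_probs: np.ndarray) -> str:
    """Collapse per-frame character probabilities into a transcript.

    frame_probs: array of shape (num_frames, len(ALPHABET)), one row of
    character probabilities per short acoustic frame.
    """
    best = frame_probs.argmax(axis=1)      # most likely symbol per frame
    chars, prev = [], None
    for idx in best:
        if idx != prev and idx != 0:       # drop repeated frames and blanks
            chars.append(ALPHABET[idx])
        prev = idx
    return "".join(chars)

# Usage: frames whose argmax spells s, s, blank, e, n, e, blank, c, a.
frames = np.eye(len(ALPHABET))[[5, 5, 0, 2, 3, 2, 0, 6, 1]]
print(greedy_ctc_decode(frames))  # → seneca
```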
The team’s networks run in two compute settings: on a nine-server machine learning lab running a variety of NVIDIA Tesla GPUs, and on a university cluster of large servers, each running 10 NVIDIA Tesla P4 GPUs. Each cluster runs a range of deep learning frameworks such as TensorFlow and Caffe.
“The computer engineering cluster is for all students in the computer engineering department, and so they have to ‘compete’ for these resources,” said Ray Ptucha, assistant professor of computer engineering at RIT, another collaborator on this project.
With access to these clusters at a premium, Jimerson tests code and checks the stability of models on a local machine running an NVIDIA TITAN X rather than inconvenience other students by running a model that might crash.
Achieving Better Accuracy
So far, the team’s efforts have brought the word error rate of its ASR model from 70 percent down to 56 percent. The goal, said Prud’hommeaux, is to get that rate down to 25 percent, which is where ASR systems were in processing English several years ago.
The more samples of spoken and written Seneca the team can accumulate, the more the error rate will decrease. (Today, English ASR models can achieve word error rates as low as 5 percent.)
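Word error rate, the metric behind those figures, is the number of word-level substitutions, deletions, and insertions needed to turn the system's transcript into the reference, divided by the number of reference words. A minimal sketch, computed via Levenshtein distance over word tokens:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of three reference words gives a WER of about 0.33.
print(word_error_rate("the cat sat", "the cat sag"))
```

By this measure, the team's improvement from 70 to 56 percent means the model now gets roughly 14 more words per hundred right against the reference transcripts.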
The team’s work is expected to help with language preservation efforts around the world.
Prud’hommeaux said the team has an agreement with an archiving institution, a condition of a grant the project received from the National Science Foundation. The resulting language archive will be made available as a resource for other efforts seeking to document threatened languages.
Additionally, Prud’hommeaux said the team’s work could prove helpful for any deep learning effort that has to make do with limited amounts of data.
Read more about the team’s work in their research papers here and here.
Feature image: The Haudenosaunee (Iroquois Confederacy) flag, via Wikimedia Commons.