Who knew AI would become such a wordsmith. But not long ago, Spence Green and John DeNero were perplexed that the latest and greatest natural language processing research wasn’t yet in use by professional translators.
The Stanford grads set out to change that. In 2015, they co-founded Lilt, an AI software platform for translation and localization services.
Applications for natural language processing have exploded in the past decade as advances in recurrent neural networks powered by GPUs have delivered better-performing AI. That’s enabled startups to offer the likes of white-label voice services, language tutors and chatbots.
Lilt’s AI software platform was developed for translation experts to use on localization projects, training its networks as they work. The hybrid human-machine platform is designed to boost translation speed and deepen domain expertise for specific projects.
Lilt’s software works much like Google’s auto-complete feature for search queries. Users review each line of text in one language and translate it into another, and the software suggests entire lines of translation that translators can accept, reject or alter.
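To make that accept-reject-alter loop concrete, here is a minimal Python sketch. The candidate list and the suggest() helper are hypothetical stand-ins for Lilt’s neural decoder, which generates completions rather than looking them up:

```python
# Hypothetical candidate translations; Lilt's real system generates these
# with a neural decoder rather than looking them up in a fixed list.
CANDIDATES = [
    "Das Haus ist blau.",
    "Das Haus ist großartig.",
    "Das Auto ist blau.",
]

def suggest(prefix: str) -> str:
    """Return the first full-line candidate consistent with the prefix
    the translator has confirmed or typed so far."""
    for candidate in CANDIDATES:
        if candidate.startswith(prefix):
            return candidate
    return prefix                      # no completion available; keep the user's text

# The translator accepts, rejects or edits each suggestion; every edit
# narrows the prefix and triggers a fresh full-line suggestion.
print(suggest(""))                     # "Das Haus ist blau."
print(suggest("Das Haus ist g"))       # "Das Haus ist großartig."
print(suggest("Das Auto"))             # "Das Auto ist blau."
```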
As translators interact with the text, their corrections train the neural networks, and the updates flow immediately back into the software. “Every one of our users has a different set of parameters that is trained for them,” said DeNero.
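What a per-user set of parameters might look like is sketched below, assuming, purely for illustration, a large frozen shared model plus a small trainable head per translator. This is not Lilt’s published architecture, just one common way to personalize a shared model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the large shared translation model; its weights stay frozen.
base = nn.Linear(16, 16)
for p in base.parameters():
    p.requires_grad = False

# One small trainable module per user: "a different set of parameters
# that is trained for them."
user_heads: dict[str, nn.Linear] = {}

def adapt(user_id: str, features: torch.Tensor, target: torch.Tensor) -> float:
    """Run one online update from a segment the translator just confirmed."""
    head = user_heads.setdefault(user_id, nn.Linear(16, 16))
    # Plain SGD keeps no state, so recreating it per call is harmless.
    opt = torch.optim.SGD(head.parameters(), lr=0.01)
    loss = nn.functional.mse_loss(head(base(features)), target)  # toy loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Each confirmed translation immediately updates only that user's parameters.
x, y = torch.randn(1, 16), torch.randn(1, 16)
print(adapt("translator_42", x, y))
```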
Lilt software — available for more than 30 languages — can improve the speed of translation projects by as much as five times, said DeNero.
Lilt is a member of NVIDIA Inception, a virtual accelerator program that helps startups get to market faster. Customers of Lilt include Canva, Zendesk and Hudson’s Bay Company.
NLP on Transformer
What sets Lilt apart in its approach to natural language processing, according to its founders, is its deployment of services built on next-generation deep neural networks. Lilt harnesses an alternative to RNNs known as the Transformer neural network architecture, introduced in the 2017 Google Brain paper “Attention Is All You Need.”
The Transformer architecture differs from the sequential nature of RNNs, which process a sentence word by word and lean heavily on the most recent words to predict the next one. Instead, at each step it applies a technique known as self-attention, which determines the next word from comparison scores computed against every word in the sentence.
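That scoring step can be written out in a few lines of NumPy. The following is the standard scaled dot-product self-attention from the paper, run on toy dimensions with random weights:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a whole sentence at once."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # every word scored against every word
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sentence
    return weights @ V                               # blend all words by their scores

rng = np.random.default_rng(0)
n_words, d = 5, 8                                    # a 5-word sentence, 8-dim embeddings
X = rng.normal(size=(n_words, d))                    # one embedding per word
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                                     # (5, 8): one updated vector per word
```

Note that the score matrix covers every pair of words in one shot, so the whole sentence is handled by a few large matrix multiplications rather than a word-by-word loop.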
This newer method is considered well suited to language understanding. Because it doesn’t depend on processing words strictly in order, the architecture allows far more parallelization and achieves higher translation quality, according to the paper’s authors.
NVIDIA GPUs recently set AI performance records, including training the Transformer neural network in just 6.2 minutes.
Fast, Personalized Translation
The architecture enables a software platform for translators that is both fast and personalized. That matters for Lilt because simultaneously serving and training many different customized user profiles is computationally demanding.
Lilt’s translation interactions happen while people are typing, so they have to complete quickly, in under 300 milliseconds, said DeNero. That means Lilt’s service has to maintain some neural networks that perform static functions and others that are adapted live.
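As an illustration of that split, the toy sketch below separates a cacheable static stage from a per-keystroke adaptive stage and asserts the 300-millisecond budget. The function names and placeholder logic are invented for the example:

```python
import time
from functools import lru_cache

LATENCY_BUDGET_MS = 300   # suggestions must land while the translator types

@lru_cache(maxsize=None)
def encode_source(sentence: str):
    """Static stage: the source sentence doesn't change between keystrokes,
    so its encoding can be computed once and cached."""
    return tuple(hash(w) for w in sentence.split())   # stand-in for encoder output

def suggest(user_id: str, sentence: str, prefix: str) -> str:
    """Adaptive stage: runs on every keystroke against the cached encoding,
    using the user's live-trained parameters (mocked here)."""
    _ = encode_source(sentence)
    return prefix + " ..."                            # placeholder completion

start = time.perf_counter()
suggestion = suggest("translator_42", "Das Haus ist blau.", "The house")
elapsed_ms = (time.perf_counter() - start) * 1000
assert elapsed_ms < LATENCY_BUDGET_MS, "missed the interactive budget"
print(f"{suggestion!r} returned in {elapsed_ms:.2f} ms")
```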
“We need GPUs in the cloud because we are training the system as they are working,” DeNero said.