by Calisa Cole

Last summer, we interviewed Ben Jiang, CEO of startup Nexiwave, who talked about his company’s work in making it possible to search for a spoken word as easily as we Google a written word or phrase.

We caught up with Ben recently and learned that Nexiwave has just entered into a partnership with UbiCast, a webcast hosting and technology company based in France. UbiCast will be the first company to offer “deep audio search” as a standard feature to all of its customers. Today’s press release notes that while UbiCast customers produce large amounts of high-value content, finding and retrieving archived information has been a challenge. Nexiwave’s technology, accelerated by NVIDIA GPUs, will help UbiCast make broad-scale audio search cost-justifiable for the first time.

Nexiwave, as Ben explained to us, focuses on “speech indexing,” which is computationally intensive and has traditionally been very expensive. Fortunately, speech indexing can be processed efficiently in parallel, making the GPU a perfect fit. Using GPUs and CUDA, Ben’s team has seen a ~70X speedup over previous solutions, which translates into cost savings for users.
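To give a feel for why this workload parallelizes so well, here is a toy sketch (our own simplification, not Nexiwave’s actual pipeline): independent chunks of already-transcribed audio are indexed concurrently, and the partial indexes are then merged. In a real system the expensive, parallelizable step is the speech recognition itself running on the GPU; here we assume the transcripts already exist.

```python
from concurrent.futures import ThreadPoolExecutor

def index_chunk(args):
    """Map each word in one transcribed chunk to its (chunk_id, position)."""
    chunk_id, transcript = args
    index = {}
    for pos, word in enumerate(transcript.lower().split()):
        index.setdefault(word, []).append((chunk_id, pos))
    return index

def merge(partial_indexes):
    """Combine per-chunk indexes into one searchable index."""
    merged = {}
    for idx in partial_indexes:
        for word, hits in idx.items():
            merged.setdefault(word, []).extend(hits)
    return merged

# Hypothetical transcribed chunks of a recorded seminar.
chunks = [
    (0, "welcome to the stereoscopic photography seminar"),
    (1, "photography on the GPU is fast"),
]

# Each chunk is indexed independently, so the map step can fan out
# across as many workers (or GPUs, in the real pipeline) as available.
with ThreadPoolExecutor() as pool:
    full_index = merge(pool.map(index_chunk, chunks))

print(full_index["photography"])  # [(0, 4), (1, 0)]
```

Because no chunk depends on any other, the speedup scales with the number of workers, which is the same property that makes the workload a good fit for massively parallel GPU hardware.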

Nexiwave offers its technology via software licenses, as well as through SaaS (software as a service) and cloud computing. An early user of the Amazon Elastic Compute Cloud (EC2) environment, Nexiwave plans to utilize the newly announced Amazon EC2 Cluster GPU Instances service.

Nexiwave can potentially change the way we interact with large volumes of archived presentations and speeches. For example, imagine searching for specific keywords such as “photography” or “stereoscopic” across hours and hours of recorded audio from conferences, college courses, or seminars. With Nexiwave’s technology, it will be possible to quickly and easily rediscover the presenters’ actual spoken words.
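As a rough illustration of what such a lookup could look like (the index layout and names here are our own invention, not Nexiwave’s API), a deep-audio-search index can be thought of as a map from spoken words to the recordings and timestamps where they occur:

```python
# Hypothetical toy index: word -> list of (recording, seconds offset).
# In a real deep-audio-search system this comes from speech recognition output.
spoken_index = {
    "photography": [("lecture_03", 142.5), ("keynote", 17.0)],
    "stereoscopic": [("lecture_03", 150.2)],
}

def search(index, query):
    """Return every (recording, time) at which the query word was spoken."""
    return sorted(index.get(query.lower(), []))

print(search(spoken_index, "Photography"))
# [('keynote', 17.0), ('lecture_03', 142.5)]
```

The result takes a listener straight to the moment a word was uttered, which is the “search spoken words like you Google written ones” experience the article describes.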