Pod Squad: Descript Uses AI to Make Managing Podcasts Quicker, Easier

by Lauren Finkle

You can’t have an AI podcast and not interview someone using AI to make podcasts better.

That’s why we reached out to serial entrepreneur Andrew Mason to talk about what he’s doing now. His company, Descript Podcast Studio, uses AI, natural language processing and automatic speech synthesis to make podcast editing easier and more collaborative.

Mason, Descript’s CEO and perhaps best known as the founder of Groupon, spoke with AI Podcast host Noah Kravitz about his company and its newest beta service, Overdub.

Key Points From This Episode

  • Descript works like a collaborative word processor. Users record audio, which Descript converts to text. They can then edit and rearrange the text, and the program makes the corresponding changes to the audio.
  • Overdub, created in collaboration with Descript’s AI research division, eliminates the need to re-record audio. Type in new text, and Overdub creates audio in the user’s voice.
  • Descript 3.0 launched in November, adding new features such as a detector that can identify and remove vocalized pauses like “um” and “uh” as well as silence.

Tweetables

“We’re trying to use AI to automate the technical heavy lifting components of learning to use editors — as opposed to automating the craft — and we leave space for the user to display and refine their craft” — Andrew Mason [07:10]

“What’s really unique to us is a kind of tonal or prosodic connecting of the dots, where we’ll analyze the audio before and after whatever you’re splicing in with Overdub, and make sure that it sounds continuous in a natural transition” — Andrew Mason [10:30]

You Might Also Like

The Next Hans Zimmer? How AI May Create Music for Video Games, Exercise Routines

Imagine Wolfgang Amadeus Mozart as an algorithm or the next Hans Zimmer as a computer. Pierre Barreau and his startup, Aiva Technologies, are using deep learning to compose music. Their algorithm can create a theme in four minutes flat.

How Deep Learning Can Translate American Sign Language

Rochester Institute of Technology computer engineering major Syed Ahmed, a research assistant at the National Technical Institute for the Deaf, uses AI to translate between American Sign Language and English. Ahmed trained his algorithm on 1,700 sign language videos.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make Our Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.