It’s easy to think of AI as cold, unbiased, objective. Not quite, Narrative Science Chief Scientist Kris Hammond suggests, because we never know when AI will repeat our biases back to us.
“Just as our biases creep into how we talk to, we train, we teach our children, they creep into the way we talk to, train, and teach our AI systems,” says Hammond, also a professor of Computer Science and Journalism at Northwestern University and Director of the Computer Science Plus X program.
Narrative Science uses artificial intelligence to turn data into stories that help people better understand the world around them. Its natural language generation platform, Quill, has generated headlines by literally generating headlines: automating the production of client investment summaries and internal performance reports, among other tasks, for clients like USAA, Fannie Mae and Deloitte.
Bias in AI: Examples Proliferating
That makes questions of bias more than just a matter of academic interest for Hammond. It’s a challenge not only in training AI on tasks — like judging beauty — that are hard to quantify to begin with, but also on tasks that would seem, to some, less influenced by our biases, such as assessing creditworthiness.
“We would like the artificial intelligence systems that we build to be cold, emotionless, oddly enough so that we can make fun of them, because they’re not as clever and good and creative as we are,” Hammond says during a wide-ranging conversation with our podcast’s host, Michael Copeland. “The reality is we build them, we train them, we sometimes give them the reasoning rules we’re going to use, and there is absolutely no way to avoid having all of our notions about how the world works creep into these systems.”
Can AI Free Us From Our Prejudices?
The solution isn’t just to look for our own biases when training AIs, but to understand our own limitations, and train AIs to help us all see past them. “As human beings we’re a collection of vaguely serviceable heuristics and a complete misunderstanding of statistics,” Hammond says. “And having machines help us — because there are people who understand who we are and how we are and how we think — and actually design those machines to really cater to the best of us, that actually is absolutely doable.”
Fast, Furious and Frugal
And if you missed Episode 6 of the AI Podcast, it’s worth a listen: Jim Burke, a graphic artist and founder of the Power Racing Series, spoke about how hackers are taking brains, a few hundred bones and a pink Barbie jeep to create an autonomous racing league that’s fast, furious and frugal.
Featured image: Valerie Everett, via Flickr.