Here’s why we need more diversity in AI

What happens when the training data that feeds our artificial intelligence is limited or flawed? We get biased products and services.

Imran Chaudhri, who created the iPhone’s user interfaces and interactions, illustrated this point bluntly onstage at the Fast Company European Innovation Festival in Milan today. “Siri never worked for me,” acknowledged the British-American designer, who speaks with an accent, “and we worked on it.”

Chaudhri, who spent more than 20 years at Apple before cofounding the still-in-stealth startup Humane, was speaking on a panel about the pursuit of inclusive AI with Michael Jones, the senior director of product at Salesforce AI Research. As evidence of AI’s susceptibility to bias increases, the pair agreed that having diverse data sets is essential to creating automated systems that transcend—rather than replicate—the flaws of the real world.

“We have an implicit bias in society today,” Chaudhri said. “And because so much of [AI] is a mimicry of our world, [computers] inherit the same problems of our world.”

As Jones explained, if all your training data for an automatic speech recognition service comes from white men from the Midwest, you’re going to alienate anyone who speaks English as a second language or with an accent. And if you develop a hiring tool to look for résumés that resemble those of your current, mostly male C-suite executives, you’ll end up with a system that automatically penalizes someone who cites her achievements in a “women’s chess club”—an apparent reference to an internal test AI at Amazon that reportedly weeded out female job candidates before it was abandoned in 2015.

Jones and Chaudhri agreed on the importance of hiring diverse and multidisciplinary design and engineering teams to ensure that companies have people who can think through the potential consequences of what they’re building.

“You have to design [AI] knowing that it’s imperfect, that it’s in a very early stage right now,” said Chaudhri. “What that means is that the training [for an AI system] is very similar to the training we use as humans. The parallel is you training a child: The ignorance that a child has is very similar to the ignorance that a computer has, and a computer will take in a lot of [your] biases.”
