AI is in Danger of Becoming too Male – New Research

Artificial Intelligence (AI) systems are becoming smarter every day, beating world champions in games like Go, identifying tumours in medical scans more accurately than human radiologists, and increasing the efficiency of electricity-hungry data centres. Some economists are comparing the transformative potential of AI with that of other “general-purpose technologies” such as the steam engine, electricity or the transistor.

But current AI systems are far from perfect. They tend to reflect the biases of the data used to train them and to break down when they face unexpected situations. They can be gamed, as we have seen with the controversies surrounding misinformation on social media, violent content posted on YouTube, or the famous case of Tay, the Microsoft chatbot, which was manipulated into making racist and sexist statements within hours.

So do we really want to turn these bias-prone, brittle technologies into the foundation stones of tomorrow’s economy?

Minimising risk

One way to minimise AI risks is to increase the diversity of the teams developing these systems. As research on collective decision-making and creativity suggests, groups that are more cognitively diverse tend to make better decisions. Unfortunately, this is a far cry from the situation in the community currently developing AI systems. And a lack of gender diversity is one important (although not the only) dimension of this.

A review published by the AI Now Institute earlier this year showed that less than 20% of the researchers applying to prestigious AI conferences are women, and that only a quarter of undergraduates studying AI at Stanford and the University of California at Berkeley are female.

The authors argued that this lack of gender diversity results in AI failures that uniquely affect women, such as an Amazon recruitment system that was shown to discriminate against job applicants with female names.

Our recent report, Gender Diversity in AI Research, involved a “big data” analysis of 1.5m papers on arXiv, a preprint website widely used by the AI community to disseminate its work.

We analysed the text of abstracts to determine which papers apply AI techniques, inferred the gender of the authors from their names and studied the levels of gender diversity in AI and their evolution over time. We also compared the situation in different research fields and countries, and differences in language between papers with female co-authors and all-male papers.
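To make the shape of that analysis concrete, here is a minimal Python sketch of the kind of pipeline described above: flag papers as AI by keyword matching on abstracts, infer author gender from first names via a lookup table, and compute the yearly share of AI papers with at least one woman co-author. The keyword list, the name table and the sample records are illustrative assumptions, not the report’s actual data or method.

```python
# Illustrative sketch only: toy keywords, toy name table, toy records.
from collections import defaultdict

AI_KEYWORDS = {"neural network", "machine learning", "deep learning", "reinforcement learning"}

# Hypothetical first-name-to-gender lookup; the report infers gender from names at scale.
NAME_GENDER = {"alice": "F", "maria": "F", "john": "M", "david": "M", "wei": None}

papers = [  # toy stand-ins for arXiv metadata records
    {"year": 1995, "abstract": "A neural network approach to ...", "authors": ["John Smith", "Alice Jones"]},
    {"year": 2018, "abstract": "Deep learning for tumour detection ...", "authors": ["David Lee"]},
    {"year": 2018, "abstract": "A survey of category theory ...", "authors": ["Maria Garcia"]},
]

def is_ai_paper(abstract: str) -> bool:
    """Crude keyword classifier standing in for the report's text analysis."""
    text = abstract.lower()
    return any(kw in text for kw in AI_KEYWORDS)

def has_woman_coauthor(authors: list[str]) -> bool:
    """True if any author's first name maps to 'F' in the lookup table."""
    first_names = (a.split()[0].lower() for a in authors)
    return any(NAME_GENDER.get(name) == "F" for name in first_names)

# year -> [number of AI papers, number with at least one woman co-author]
counts = defaultdict(lambda: [0, 0])
for p in papers:
    if is_ai_paper(p["abstract"]):
        counts[p["year"]][0] += 1
        counts[p["year"]][1] += int(has_woman_coauthor(p["authors"]))

for year in sorted(counts):
    total, with_women = counts[year]
    print(f"{year}: {with_women}/{total} AI papers have at least one woman co-author")
```

A real analysis at the scale of 1.5m papers would of course rely on more robust paper classification and name-based gender inference than this toy example, but the output, a per-year share of AI papers with at least one woman co-author, is the kind of statistic reported below.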

Our analysis confirms the idea that there is a gender diversity crisis in AI research. Only 13.8% of AI authors in arXiv are women and, in relative terms, the proportion of AI papers co-authored by at least one woman has not improved since the 1990s.

To find out more go to: https://techfinancials.co.za/2019/08/19/ai-is-in-danger-of-becoming-too-male-new-research/