
Amazon scraps 'sexist AI' recruiting tool that showed bias against women

The Telegraph | 11/10/2018 | James Cook


Amazon has scrapped a "sexist" internal tool that used artificial intelligence to sort through job applications.

The AI was created by a team at Amazon's Edinburgh office in 2014 as a way to automatically sort through CVs and pick out the most promising candidates. 

However, it quickly taught itself to prefer male candidates over female ones, according to members of the team who spoke to Reuters. 

They noticed that it was penalising CVs that included the word "women's," such as "women's chess club captain." It also reportedly downgraded graduates of two all-women's colleges. 


The problem stemmed from the fact that the system was trained on CVs submitted to Amazon over a 10-year period, most of which came from men.

The AI was tweaked in an attempt to fix the bias. However, last year, Amazon lost faith in its ability to be neutral and abandoned the project altogether.

Amazon recruiters are believed to have consulted the system's recommendations when hiring, but did not rely solely on its rankings. Currently, women make up 40pc of Amazon's workforce.

Stevie Buckley, the co-founder of UK job website Honest Work, which is used by companies such as Snapchat to recruit for technology roles, said that “the basic premise of expecting a machine to identify strong job applicants based on historic hiring practices at your company is a surefire method to rapidly scale inherent bias and discriminatory recruitment practices.”

The danger of inherent bias in the use of algorithms is a common problem in the technology industry. Algorithms are not told to be biased, but can become unfair through the data they use.
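How this happens can be illustrated with a toy sketch (hypothetical, not Amazon's actual system): a naive CV scorer that weights words by how often they appeared in past hires versus rejections will absorb whatever skew the historical decisions contained, without ever being told about gender.

```python
from collections import Counter

# Toy "historical" hiring data: (CV text, 1 = hired, 0 = rejected).
# Because most past hires were men, the token "women's" happens to
# appear mainly in rejected CVs.
history = [
    ("java engineer chess club captain", 1),
    ("python developer rugby team", 1),
    ("java engineer women's chess club captain", 0),
    ("python developer women's coding society", 0),
    ("go developer hiking club", 1),
]

def token_scores(data):
    """Weight each token by (times seen in hires) - (times seen in rejections)."""
    hired, rejected = Counter(), Counter()
    for cv, label in data:
        for tok in set(cv.split()):
            (hired if label else rejected)[tok] += 1
    return {t: hired[t] - rejected[t] for t in hired | rejected}

def score_cv(cv, scores):
    """Score a new CV as the sum of its tokens' learned weights."""
    return sum(scores.get(tok, 0) for tok in cv.split())

scores = token_scores(history)
# The scorer was never told about gender, yet "women's" now carries a
# negative weight, so any CV containing it is penalised.
print(scores["women's"])  # -2
```

The model simply reproduces the pattern in its training data; nothing in the code mentions gender, which is exactly why such bias is easy to introduce and hard to spot.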

Jessica Rose, a technical manager at education start-up FutureLearn and technology speaker, said that "the value of AI as it's used in recruitment today is limited by human bias."

"Developers and AI specialists carry the same biases as talent professionals, but we're often not asked to interrogate or test for these during the development process," she said.

Google had to remove the ability to search for photos of gorillas in its Google Photos app after the service began to suggest that photographs of people of colour were actually photographs of gorillas.

Amazon’s failed recruitment software and the issues with Google Photos illustrate one of the largest weaknesses of machine learning, where computers teach themselves to perform tasks by analysing data.


Last month, IBM launched a tool designed to detect bias in AI. The AI Fairness 360 toolkit allows developers to see clearly how their algorithms work and which pieces of data are used to make decisions.
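One of the standard measurements such fairness toolkits report is "disparate impact": the rate of favourable outcomes for a disadvantaged group divided by the rate for the advantaged group. A value near 1.0 suggests parity, while values below roughly 0.8 are a common red flag. The sketch below computes that metric standalone on made-up numbers; it is not the toolkit's own API.

```python
def disparate_impact(outcomes):
    """outcomes: list of (group, favourable) pairs, group is "priv" or "unpriv".

    Returns P(favourable | unprivileged) / P(favourable | privileged).
    """
    def rate(group):
        rows = [fav for g, fav in outcomes if g == group]
        return sum(rows) / len(rows)
    return rate("unpriv") / rate("priv")

# Toy hiring outcomes: 4 of 5 privileged candidates advanced,
# but only 2 of 5 unprivileged candidates did.
outcomes = ([("priv", 1)] * 4 + [("priv", 0)]
            + [("unpriv", 1)] * 2 + [("unpriv", 0)] * 3)

print(round(disparate_impact(outcomes), 2))  # 0.5, well below the 0.8 rule of thumb
```

Checks like this can only flag a skew after the fact; they do not by themselves explain or remove the bias in the underlying data.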

“Considering Amazon's exhaustive resources and their exceptionally talented team of engineers,” Mr Buckley said, “the fact that their AI recruiting tool failed miserably suggests that we should maintain a default scepticism towards any organisation that claims to have produced an effective AI tool for recruitment.”

