
World's best facial recognition AI systems STILL struggle to tell black people apart - particularly women

Daily Mail | 7/23/2019 | Tim Collins for MailOnline

[Pictured: file photo of an example facial recognition dataset]

Evidence continues to mount that facial recognition systems - some of which are already deployed by police forces worldwide - struggle to tell black people apart.

Research conducted by the National Institute of Standards and Technology (NIST) in the US tested AI software from more than 50 companies across the globe.

Experts from the government agency found up to a tenfold difference in error rates when it came to correctly identifying black women compared with white women.

White men were found to present the least challenge when it came to correct identification.

The finding builds on previous studies that have also noted serious discrepancies in facial recognition tools when it comes to images of people with darker skin tones.

A number of factors - including the mix of races used in photographs that train the systems and even the use of makeup by women - have been blamed for past errors.

[Graph: test results for various algorithms; an upper red line indicates false matches for black women]

Researchers from NIST - based in Gaithersburg, Maryland - tested the software's ability to verify whether two different images showed the same face.

French firm Idemia's software is already used by police forces in the US, Australia, and France, as well as by border agencies at American cruise terminals, according to reports in Wired.

The researchers found two of Idemia's most recent tools were significantly more likely to falsely match black women's faces than those of white women, black men, or white men.

At settings where Idemia's algorithms gave a false match rate (FMR) of one in 10,000 for white women, the software had an FMR of one in 1,000 for black women - a rate ten times higher.
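For context, an FMR is the fraction of 'impostor' comparisons - pairs of images showing two different people - that the software wrongly declares a match. The Python sketch below is a minimal illustration of that calculation, using hypothetical counts chosen only to reproduce the one-in-10,000 versus one-in-1,000 gap described above; the function and figures are illustrative, not NIST's actual test harness.

```python
# Minimal sketch of how a false match rate (FMR) is computed.
# The counts below are hypothetical, chosen only to illustrate the
# tenfold gap reported between the two groups.

def false_match_rate(false_matches: int, impostor_comparisons: int) -> float:
    """FMR = fraction of impostor comparisons (images of two different
    people) that the system wrongly declares a match."""
    return false_matches / impostor_comparisons

# Hypothetical outcomes at a fixed decision threshold:
fmr_white_women = false_match_rate(10, 100_000)   # one in 10,000
fmr_black_women = false_match_rate(100, 100_000)  # one in 1,000

print(f"White women FMR: {fmr_white_women:.4%}")           # 0.0100%
print(f"Black women FMR: {fmr_black_women:.4%}")           # 0.1000%
print(f"Ratio: {fmr_black_women / fmr_white_women:.0f}x")  # 10x
```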

Writing in the report, its authors said: 'White males ... is the demographic that usually gives the lowest FMR.

'Black females ... is the demographic that usually gives the highest FMR.'

Idemia says that the latest AI tested has not yet been released commercially and that these issues will be ironed out with further development.

The company also claims that engineers at NIST, whose tests are seen as a gold standard for facial recognition accuracy monitoring, were likely pushing their software to its limits.

'There are physical differences in people and the algorithms are going to improve on different people at different rates,' Donnie Scott, US public security division lead at Idemia, based in Paris, told Wired.

This is not the first time that the issue of racial bias in facial recognition software has been raised.

A March 2018 study called 'Gender Shades', conducted by the MIT Media Lab, discovered that popular facial recognition services from Microsoft, IBM and Face++ vary in accuracy based on gender and race.

In June of that year, Microsoft announced it had updated its facial recognition technology in an attempt to tackle the MIT study's findings.

In January 2019, MIT researchers announced a potential solution to the problem, in the form of a new algorithm that not only scans for faces but also evaluates the training data supplied to it.

The algorithm scans for biases in the training data and eliminates any that it perceives, resulting in a more balanced dataset.
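The published method learns latent structure in the data rather than relying on explicit demographic labels, but the general idea of rebalancing a skewed training set can be sketched more simply. The Python example below resamples a toy dataset so each group is drawn equally often; the dataset, group labels, and inverse-frequency weighting are assumptions made for illustration, not the MIT researchers' actual algorithm.

```python
import random
from collections import Counter

# Hypothetical labelled training set: (image_id, demographic_group) pairs,
# deliberately skewed 90/10 between two groups.
dataset = ([(f"img_{i}", "group_a") for i in range(9_000)]
           + [(f"img_{i}", "group_b") for i in range(9_000, 10_000)])

def rebalance(samples, n_draws):
    """Resample so every demographic group is drawn with equal probability,
    compensating for skewed group frequencies in the raw data."""
    counts = Counter(group for _, group in samples)
    # Weight each example inversely to its group's frequency, so each
    # group contributes the same total weight.
    weights = [1.0 / counts[group] for _, group in samples]
    return random.choices(samples, weights=weights, k=n_draws)

balanced = rebalance(dataset, n_draws=10_000)
print(Counter(group for _, group in balanced))  # roughly a 50/50 split
```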

The full findings of the latest study were published in a paper by NIST.
