Buolamwini ran tests on software created by big tech firms, including Amazon, whose Rekognition facial recognition program is sold to law enforcement agencies, a move the ACLU has dubbed “dangerous.” Buolamwini’s tests revealed that Amazon’s system had much higher error rates when classifying the gender of darker-skinned women than of lighter-skinned men.
Buolamwini alerted the companies to her findings and was stunned by their reaction. “I didn’t know their reaction would be quite so hostile,” Buolamwini told the AP in an interview.
In a blog post, Amazon said that Buolamwini’s study made “erroneous claims” and confused facial analysis with facial recognition:
“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media. We are eager to continue to work with researchers, academics, and customers, to continuously improve as we evolve this important technology.”
While Amazon may have been displeased by her findings, a group of AI scholars, including a winner of computer science’s top prize, defended her work and called on Amazon to stop selling its facial recognition software to police. Groups of Amazon employees and shareholders have also asked the company to cease selling Rekognition, based partly on Buolamwini’s research. After protests by Amazon’s lawyers, the U.S. Securities and Exchange Commission told the company that it could not prevent an investor vote on the issue at its annual meeting in May.
Facial recognition is becoming more widespread, from airport security and hotel check-in to video games, from identifying Civil War-era soldiers to screening crowds at public events such as Taylor Swift concerts. The accuracy of facial recognition can also be a matter of life or death, whether it is deployed by police, governments, or schools, or when self-driving cars fail to “see” pedestrians with darker skin tones.