Artificial intelligence is turning previously unfathomable ideas into concrete realities every day. One flagship application is facial recognition technology, which law enforcement has embraced for identifying suspects; the practice has its detractors but is still touted for its crime-prevention potential.

For all its potential, however, the technology may have some concerning side effects. A recent MIT report found bias in a popular, Amazon-developed facial recognition system, leading “at least 25 prominent artificial-intelligence researchers, including experts at Google, Facebook, Microsoft, and a recent winner of the prestigious Turing Award” to call on Amazon to stop selling it to law enforcement. Naturally, Amazon disagrees.

The back-and-forth underscores the potentially far-reaching consequences of facial recognition technology as it becomes more widely adopted, and the subject merits a closer look.

The MIT Study

The study, published in January by MIT’s Media Lab and co-authored by Joy Buolamwini and Inioluwa Deborah Raji, found evidence of systemic bias in Amazon’s Rekognition system.

According to The New York Times, the peer-reviewed report showed that Rekognition “had more trouble identifying the gender of female and darker-skinned faces in photos than similar services from IBM and Microsoft…[mistaking] women for men 19 percent of the time…and [misidentifying] darker-skinned women for men 31 percent of the time.”

That “all classifiers performed best for lighter individuals and males overall” while “[performing] worst for darker females” was perhaps less surprising in light of how facial recognition algorithms are trained. The MIT report notes that previous research has found the labeled data used to train such algorithms to be “[biased]…[resulting] in algorithmic discrimination.”
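
The kind of disaggregated evaluation behind those findings can be sketched in a few lines. Everything in the snippet below is illustrative: the file name and column labels are hypothetical stand-ins, not the study’s actual benchmark data, which used its own annotated photo set.

```python
# A minimal sketch of a disaggregated error-rate audit in the spirit of the
# MIT study. "predictions.csv" and its column names are hypothetical.
import pandas as pd

df = pd.read_csv("predictions.csv")  # columns: skin_type, true_gender, predicted_gender

# Flag each misclassified face.
df["error"] = df["true_gender"] != df["predicted_gender"]

# Error rate per intersectional subgroup (e.g. darker-skinned women), the
# breakdown behind figures like the 31 percent error rate reported above.
by_subgroup = df.groupby(["skin_type", "true_gender"])["error"].mean()
print(by_subgroup.sort_values(ascending=False))
```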

This problem is not unique to facial recognition technology: although unintentional, algorithmic bias tends to reflect the biases of the societies that build these systems. As the stakes grow higher and the potential use cases more fraught, manufacturers are under pressure to improve accuracy while defending themselves against charges of bias.

Amazon’s Response

Two Amazon executives, Dr. Matthew Wood and Michael Punke, wrote separate rebuttals of the study and the subsequent New York Times report. Wood accused the researchers of running “‘tests’ inconsistent with how the service is designed to be used” and of amplifying “the test’s false and misleading conclusions through the news media,” among other claims. Punke acknowledged the potential for abuse of facial recognition technology and affirmed the company’s support for both regulation and transparency from law enforcement about how it uses the technology, but insisted that “new technology should not be banned or condemned because of its potential misuse.” Instead, he advocated “open, honest, and earnest dialogue among all parties involved to ensure that the technology is applied appropriately and is continuously enhanced.”

Amazon says it has not learned of any instances of law enforcement misusing Rekognition, and notes that its policy disallows using the technology in illegal ways. Additionally, a company spokeswoman told The New York Times that Amazon had updated the service since the study was conducted and “that it had found no differences in error rates by gender and race when running similar tests.”

Researcher Skepticism

The New York Times report, however, indicates the company “has declined to disclose how police or intelligence agencies are using its Rekognition system and whether the company puts any restrictions on its use,” further fueling concerns about the technology. In their open letter, researchers took umbrage at what they characterized as a “[misrepresentation] of the technical details for the work and the state-of-the-art in facial analysis and face recognition” in Wood and Punke’s blog posts, issuing a four-point rebuttal affirming the validity of the study.

The first point is that bias in one system is cause for concern in others, because such systems can “severely impact people’s lives.” The second is that the original study was “conducted within the context of Rekognition’s use,” using the “publicly available” API and taking into account the societal context of its use as well as the regulations and standards in force at the time of the study. Third, they noted that the study “has been replicated by many companies based on the details provided in the paper.” Last, they expressed apprehension at the lack of laws or requirements “to ensure that Rekognition is used in a manner that does not infringe on civil liberties.”
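
For context, the “publicly available” API in question can be exercised by anyone with an AWS account, which is what makes external audits like the study’s possible. Below is a hedged sketch of that sort of call; the region, file name, and credentials setup are assumptions, not details from the study.

```python
# A minimal sketch of querying Amazon Rekognition's public DetectFaces API
# for a gender prediction, as an outside auditor can. Assumes AWS
# credentials are already configured; the image file is illustrative.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("face.jpg", "rb") as f:
    image_bytes = f.read()

# Attributes=["ALL"] requests the full set of face attributes, which
# includes a Gender estimate with a confidence score.
response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    gender = face["Gender"]
    print(f"Predicted: {gender['Value']} ({gender['Confidence']:.1f}% confidence)")
```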

What Happens Now?

Researchers believe these issues are serious enough to warrant halting sales of Rekognition to law enforcement, while the study’s authors maintain that “further work is needed to see if the substantial error rate gaps on the basis of gender, skin type and intersectional subgroup revealed in this study of gender classification persist in other human-based computer vision tasks.” Both stances, along with Amazon’s stated support for regulatory legislation, point toward assembling some type of supervisory framework, though it remains unclear what that would look like.

Another tech giant may offer some hints. Microsoft backed a February bill in Washington State “that would require notices to be posted in public places using facial-recognition tech and ensure that government agencies obtain a court order when looking for specific people.” Those protections are better than nothing, but far less comprehensive than competing legislation the company declined to support. A middle ground may be attainable, but what it looks like is still murky.

What is clear is that facial recognition technology is not going away. These may be early days for its widespread use, but it has improved to the point of having real-world applications, and those applications carry potentially serious consequences. Regulatory legislation, along with continued work to address bias, seems to be the path forward.