A growing number of corporations are using Artificial Intelligence (AI) to automate processes and crunch information with speed and efficiency far exceeding human capabilities. But for all its revolutionary properties, AI has not escaped criticism. The underlying algorithms that govern AI behavior have been accused of bias in high-profile cases involving Amazon, Facebook, and others.

At a time when tech’s most prominent companies are enduring across-the-board government scrutiny, algorithmic bias is firmly on lawmakers’ radar. A new bill, the Algorithmic Accountability Act, was introduced by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) and Representative Yvette Clarke (D-NY) in April 2019 to address concerns with the technology and how it is used.

A Briefing on the Algorithmic Accountability Bill

The Algorithmic Accountability Act targets companies with more than $50 million in annual gross revenue, as well as those in possession of data on more than 1 million consumers or consumer devices and those that operate as data brokers. The Act seeks to “direct the Federal Trade Commission (FTC) to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments” on both existing and new systems.

The bill is intentionally broad, with tech news outlet The Verge describing it as “[seemingly] designed to cover countless other controversial AI tools – as well as the training data that can produce biased outcomes in the first place.” Impact assessments would examine algorithms and training data for “accuracy, fairness, bias, discrimination, privacy, and security,” then require companies to address the issues that surface. Companies would also be required to examine the implications of their information systems on “the privacy and security of consumers’ personal information” – a timely addition in the era of data as currency.
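The bill leaves the methodology of these impact assessments to the FTC, but a simple bias audit gives a flavor of what one automated check could involve. Below is a minimal Python sketch, using entirely hypothetical decision records, that compares a model’s approval rates across demographic groups and applies the familiar “four-fifths” disparate-impact rule of thumb; nothing in the Act prescribes this particular metric, and real assessments would also cover accuracy, privacy, and security.

```python
# A minimal sketch of one check an impact assessment might include:
# compare a model's positive-outcome rates across demographic groups
# and flag a potential disparate impact via the "four-fifths" rule.
# The decision log and group labels below are hypothetical.

from collections import defaultdict

def selection_rates(records):
    """Share of positive model decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log: (group label, model approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates, round(ratio, 2))
if ratio < 0.8:  # common four-fifths threshold
    print("Potential disparate impact: flag for human review")
```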

Artificial Intelligence Bias in Action

The press release announcing the Algorithmic Accountability Act cites two recent examples of computer algorithms producing biased and discriminatory results. The first involves Facebook, which several fair housing groups accused of violating the Fair Housing Act. The Fair Housing Act – in effect since 1968 – offers protections against discrimination when purchasing or renting a home, securing a mortgage, and “engaging in other housing-related activities.” Facebook had promised to crack down on advertisers who used “[targeting] tools” to show housing or employment ads to whites only, but landlords and real estate brokers have reportedly still been able to use Facebook’s ad-targeting algorithms to prevent families with children, women, and others from receiving rental and sales ads for housing.

The second cited case of AI-driven bias involved an Amazon-developed tool that used machine learning to comb through resumes. The company, which has leveraged automation across its business to great effect, found the “new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way,” reported Reuters. Because the training data consisted of resumes submitted to the company over a 10-year period, the system reflected an industry-wide gender gap, “in effect… [teaching] itself that male candidates were preferable.” Even after neutralizing certain terms that reinforced the bias, Amazon could not “guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory.” Researchers examining Amazon’s facial recognition technology later identified similar discrepancies affecting women and people of color – another example of the way AI reflects the data used to train it.
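The Amazon episode illustrates a general property of machine learning: deleting an explicit attribute from the training data does not remove bias when other features correlate with it. The toy Python sketch below (synthetic data, not Amazon’s actual system) trains a classifier on historically skewed hiring labels with the gender column dropped; the model still assigns negative weight to a correlated proxy feature, much as Amazon’s tool reportedly penalized resumes containing the word “women’s.”

```python
# Toy illustration of proxy bias: synthetic data, not Amazon's system.
# Historical hiring labels are skewed against one group; even with the
# gender column removed, a correlated proxy feature absorbs the bias.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)                   # 0 or 1, synthetic
skill = rng.normal(size=n)                       # genuine job-relevant signal
proxy = gender + rng.normal(scale=0.3, size=n)   # e.g. a term more common in one group's resumes

# Historical labels reward skill but also, unfairly, gender == 0
hired = (skill + (gender == 0) + rng.normal(scale=0.5, size=n)) > 0.5

# Train WITHOUT the gender column; only skill and the proxy remain
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

print("weight on skill:", model.coef_[0][0])   # positive, as expected
print("weight on proxy:", model.coef_[0][1])   # negative: the bias resurfaces
```

An outcome audit like the one sketched earlier would catch this model’s skewed results even though gender never appears among its inputs.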

What’s Next?

The Algorithmic Accountability Act remains in the early stages of the legislative process but has garnered support from several tech and civil rights groups, including Data for Black Lives, the Center on Privacy and Technology at Georgetown Law, and the National Hispanic Media Coalition, according to the press release.

The non-profit, non-partisan Center for Data Innovation, however, was less pleased with the bill in its current state. The group argues that the bill should expand its scope “to all high-risk decisions, regardless of the technology involved,” that it should enlarge its parameters to include more companies, and that it should make impact assessments public, among other suggestions. It warns that the legislation as written would create “overreaching regulations” that would do little to protect consumers against many potential algorithmic harms while inhibiting benign and beneficial applications of algorithms.

In a world where algorithms are fixtures of business – and by extension, people’s lives – the issue of biased training data is increasingly consequential. The Algorithmic Accountability Act takes a first stab at addressing those problems. While it may not be a cure-all for algorithmic bias, it does represent recognition from lawmakers of the power and prevalence of machine learning and artificial intelligence tools, as well as of their ability to impact society in very real ways.