HAI 2019 Fall Conference: The Coded Gaze: Dangers of Supremely White Data and Ignoring Intersectionality, with Joy Buolamwini

Oct. 28, 2019, 10:44 p.m.


“AI, Ain’t I a woman?”

Tech ethics researcher Joy Buolamwini opened her talk with a 21st-century twist on a quote from Sojourner Truth, one of the most powerful advocates of the abolitionist movement.

She then displayed a video showing facial recognition software failing to correctly identify the gender of notable African-American women, like Ida B. Wells, Shirley Chisholm and Michelle Obama. 

In her talk, Buolamwini shed light on biases embedded in the algorithms of facial recognition technology. She conducts research at the MIT Media Lab and founded the Algorithmic Justice League (AJL), a collective of artists, researchers and activists that aims to identify and mitigate algorithmic bias in facial-recognition software. Buolamwini previously testified before Congress in May on technology’s impact on civil rights and liberties.

Facial recognition algorithms’ “single-axis analysis,” which uses only two labels, male and female, to categorize humans, “does not give us a full picture,” Buolamwini said. When fed data sets in which the samples are disproportionately lighter-skinned, the software performed poorly at identifying the faces of people with darker skin, especially women.

Over the past few years, various civil society groups have raised concerns about facial-recognition services developed by big tech companies. In 2015, Google apologized after its new Photos application mistakenly identified two black individuals as “gorillas.” The company again faced backlash after it was accused of hiring subcontractors who targeted homeless people and college students with darker skin as test subjects to improve its facial recognition technology.

Bias in AI-powered facial recognition reflects “the priorities, preferences and prejudices that shape technology,” Buolamwini said. 

With her team at MIT, Buolamwini audited facial recognition demos from major American and Chinese tech companies, including Google, Microsoft and Face++. The research showed that the services performed best at identifying white male faces but often failed to correctly identify black female faces, with one demo even scoring below 50% accuracy.

Buolamwini argued that the “patriarchy” and “white supremacy” in technology can create a “power shadow” that disadvantages marginalized groups. Without adequate scrutiny of how the private and public sectors apply the technology in hiring and policing, unchecked biases in AI will bring even more dire consequences, Buolamwini said.

“Intersectionality matters,” because “aggregate statistics can mask important differences,” Buolamwini continued. She said that “aggregate statistics” based on “supremely white data” result in “poor representation of the undersampled majority.”
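
To see how that masking can happen, consider a small numerical sketch. The Python snippet below uses invented counts, not Buolamwini’s data or code, for a hypothetical gender classifier evaluated on 400 faces: the single aggregate accuracy looks respectable, while disaggregating by skin type and gender at the same time exposes a far weaker result for darker-skinned women.

```python
# Toy illustration with invented numbers (not Buolamwini's data or code):
# a single aggregate accuracy can hide large gaps between intersectional groups.

# Hypothetical evaluation counts for a gender classifier:
# (skin type, gender) -> (correct predictions, total faces)
counts = {
    ("lighter", "male"):   (98, 100),
    ("lighter", "female"): (93, 100),
    ("darker", "male"):    (88, 100),
    ("darker", "female"):  (65, 100),
}

correct = sum(c for c, _ in counts.values())
total = sum(t for _, t in counts.values())
print(f"Aggregate accuracy: {correct / total:.1%}")  # 86.0% -- looks acceptable

# Disaggregating along both axes at once reveals what the aggregate masks.
for (skin, gender), (c, t) in counts.items():
    print(f"{skin:7} {gender:6}: {c / t:.1%}")
# darker female faces: 65.0%, far below the 98.0% for lighter male faces
```

Reporting only the 86% figure would obscure the 33-point gap between the best- and worst-served groups, which is the kind of pattern Buolamwini’s audits surfaced.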

To prevent this, she emphasized the importance of using more diversified categories and datasets. Her team created the Pilot Parliaments Benchmark (PPB), which complements two existing benchmarks used in facial recognition software. The new benchmark significantly improved the demos’ performance in identifying the faces of people of color and women, she said.
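
As a rough sketch of what a diversified benchmark involves, the snippet below (hypothetical records and field names, not the actual PPB tooling) tallies an evaluation set’s metadata by skin type and gender, so that any skew is visible before a single aggregate accuracy number is reported.

```python
# Sketch under assumptions: hypothetical metadata records, not the real
# Pilot Parliaments Benchmark files or field names.
from collections import Counter

# One dict per face image in an evaluation set.
benchmark = [
    {"skin_type": "darker",  "gender": "female"},
    {"skin_type": "darker",  "gender": "male"},
    {"skin_type": "lighter", "gender": "female"},
    {"skin_type": "lighter", "gender": "male"},
    {"skin_type": "lighter", "gender": "male"},  # an imbalance to detect
]

group_sizes = Counter((row["skin_type"], row["gender"]) for row in benchmark)
total = sum(group_sizes.values())
for (skin, gender), n in sorted(group_sizes.items()):
    print(f"{skin:7} {gender:6}: {n} images ({n / total:.0%} of benchmark)")
# A balanced benchmark keeps these shares roughly equal across all four groups.
```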

When talking about how to move toward “algorithmic justice,” Buolamwini underscored the importance of establishing a robust funding structure to support research done in the public interest. 

“Business interests are just not always aligned with ethical AI,” Buolamwini said. “This is not a surprise.”

Buolamwini proposed introducing an “algorithmic accountability tax” that would require companies to allocate a small percentage of their research and development budget to a separate fund. This money would be used to finance public interest research that could monitor commercial activities that use “AI systems in robust enough ways that could impact society.”

She stressed the importance of putting public interest research like hers into action, citing the Safe Face Pledge, part of the AJL’s watchdog efforts. According to its website, the pledge is designed “to make public commitments toward mitigating the abuse of facial analysis technology.” Such safeguards can prevent corporations or government agencies from abusing facial recognition technology, she said.

Contact Won Gi Jung at jwongi ‘at’ stanford.edu, Max Hampel at mhampel ‘at’ stanford.edu, Marianne Lu at mlu23 ‘at’ stanford.edu, Daniel Yang at danieljhyang ‘at’ stanford.edu and Trishiet Ray at trishiet ‘at’ stanford.edu.

Daniel Yang is a staff writer interested in studying History. Contact him at danieljy 'at' stanford.edu.
