March 9, 20:00–21:45 UTC
Track One

Paper Session 15

Session Chair: Jade Abbott

Differential Tweetment: Mitigating Racial Dialect Bias in Harmful Tweet Detection

Ari Ball-Burack, Michelle Seng Ah Lee, Jennifer Cobbe, Jatinder Singh
View Paper

Abstract

Automated systems for detecting harmful social media content are afflicted by a variety of biases, some of which originate in their training datasets. In particular, some systems have been shown to propagate racial dialect bias: they systematically classify content aligned with the African American English (AAE) dialect as harmful at a higher rate than content aligned with White English (WE). This perpetuates prejudice by silencing the Black community. To address this problem, we adapt and apply two existing bias mitigation approaches: preferential sampling pre-processing and adversarial debiasing in-processing. We analyse the impact of our interventions on model performance and propagated bias. We find that when bias mitigation is employed, a high degree of predictive accuracy is maintained relative to baseline, and in many cases bias against AAE in harmful tweet predictions is reduced. However, the specific effects of these interventions on bias and performance vary widely between dataset contexts. This variation suggests the unpredictability of autonomous harmful content detection outside of its development context. We argue that this, and the low performance of these systems at baseline, raise questions about the reliability and role of such systems in high-impact, real-world settings.
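
The abstract names two mitigation approaches; the sketch below illustrates the first, a preferential-sampling-style pre-processing step that resamples training data so the harmful label is statistically independent of dialect group. The DataFrame layout and the column names ("dialect", "harmful") are illustrative assumptions rather than the authors' actual pipeline, and this simplified uniform resampling stands in for the full preferential sampling procedure.

```python
import numpy as np
import pandas as pd

def preferential_sample(df: pd.DataFrame, group_col: str, label_col: str,
                        seed: int = 0) -> pd.DataFrame:
    """Resample so that label_col is statistically independent of group_col."""
    rng = np.random.default_rng(seed)
    n = len(df)
    pieces = []
    for g in df[group_col].unique():
        group_size = int((df[group_col] == g).sum())
        for y in df[label_col].unique():
            cell = df[(df[group_col] == g) & (df[label_col] == y)]
            label_size = int((df[label_col] == y).sum())
            # Cell size we would expect if group and label were independent.
            target = int(round(group_size * label_size / n))
            if len(cell) == 0 or target == 0:
                continue
            # Duplicate rows only when the cell is under-represented.
            idx = rng.choice(cell.index.to_numpy(), size=target,
                             replace=target > len(cell))
            pieces.append(df.loc[idx])
    # Shuffle the rebalanced data before training a harmful-content classifier.
    return pd.concat(pieces).sample(frac=1.0, random_state=seed)

# Hypothetical usage (column names are assumptions):
# balanced = preferential_sample(tweets_df, group_col="dialect", label_col="harmful")
```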

Spoken Corpora Data, Automatic Speech Recognition, and Bias Against African American Language: The Case of Habitual ‘Be’

Joshua Martin
View Paper

Abstract

Recent work has revealed that major automatic speech recognition (ASR) systems such as Apple, Amazon, Google, IBM, and Microsoft perform much more poorly for Black U.S. speakers than for white U.S. speakers. Researchers postulate that this may be a result of biased datasets which are largely racially homogeneous. However, while the study of ASR performance with regard to the intersection of racial identity and language use is slowly gaining traction within AI, machine learning, and algorithmic bias research, little to nothing has been done to examine the data drawn from the spoken corpora commonly used in the training and evaluation of ASRs in order to understand whether or not they are actually biased. This study seeks to begin addressing this gap in the research by investigating spoken corpora used for ASR training and evaluation for a grammatical feature of what the field of linguistics terms African American Language (AAL), a systematic, rule-governed, and legitimate linguistic variety spoken by many (but not all) African Americans in the U.S. This grammatical feature, habitual 'be', is an uninflected form of 'be' that encodes the characteristic of habituality, as in "I be in my office by 7:30am", paraphrasable as "I am usually in my office by 7:30" in Standardized American English. This study utilizes established corpus linguistics methods on the transcribed data of four major spoken corpora -- Switchboard, Fisher, TIMIT, and LibriSpeech -- to understand the frequency, distribution, and usage of habitual 'be' within each corpus as compared to a reference corpus of spoken AAL -- the Corpus of Regional African American Language (CORAAL). The results find that habitual 'be' appears far less frequently, is dispersed across far fewer transcribed texts, and is surrounded by a much less diverse set of word types and parts of speech in the four ASR corpora than in CORAAL. This work provides foundational evidence that spoken corpora used in the training and evaluation of widely used ASR systems are, in fact, biased against AAL and likely contribute to poorer ASR performance for Black users.
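
As an illustration of the kind of corpus comparison the abstract describes (normalized frequency and dispersion of a target feature across transcribed texts), here is a hedged sketch. The regex for uninflected 'be' after a subject pronoun is a crude proxy, not the paper's annotation procedure, and the assumption of one plain-text transcript per file is invented for the example.

```python
import re
from pathlib import Path

# Crude heuristic: a subject pronoun immediately followed by uninflected "be".
# Real identification of habitual 'be' requires careful linguistic annotation.
HABITUAL_BE = re.compile(r"\b(?:i|you|we|they|he|she|it)\s+be\b", re.IGNORECASE)

def corpus_stats(corpus_dir: str) -> dict:
    """Frequency per million words and dispersion (share of texts with a hit)."""
    paths = list(Path(corpus_dir).glob("*.txt"))
    total_words = total_hits = texts_with_hit = 0
    for path in paths:
        text = path.read_text(encoding="utf-8", errors="ignore")
        hits = len(HABITUAL_BE.findall(text))
        total_words += len(text.split())
        total_hits += hits
        texts_with_hit += int(hits > 0)
    return {
        "per_million_words": 1e6 * total_hits / max(total_words, 1),
        "dispersion": texts_with_hit / max(len(paths), 1),
    }

# Hypothetical comparison of a reference corpus against an ASR training corpus:
# print(corpus_stats("coraal_transcripts/"), corpus_stats("librispeech_transcripts/"))
```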

Towards Cross-Lingual Generalization of Translation Gender Bias

Won Ik Cho, Jiwon Kim, Jaeyoung Yang, Nam Soo Kim
View Paper

Abstract

Cross-lingual generalization issues for less explored languages have been broadly tackled in recent NLP studies. In this study, we apply this philosophy to the problem of translation gender bias, which necessarily involves multilingualism and socio-cultural diversity. Beyond the conventional evaluation criteria for social bias, we aim to bring various linguistic viewpoints into the measurement process, creating a template that makes evaluation less tilted toward specific types of language pairs. With a manually constructed set of content words and templates, we check both the accuracy of gender inference and the fluency of translation for German, Korean, Portuguese, and Tagalog. Inference accuracy and disparate impact, two associated measures of biasedness, show that the failure of bias mitigation threatens the delicacy of translation. Furthermore, our analyses of each system and language indicate that translation fluency and inference accuracy are not necessarily correlated. The results implicitly suggest that the amount of available language resources that boosts performance might amplify the bias cross-linguistically.
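
To make the paired metrics concrete, here is a small sketch that computes gender-inference accuracy together with one common formulation of disparate impact (the ratio of per-group correct-inference rates) over template sentences whose intended gender is known. The record format and the 'F'/'M' labels are assumptions for illustration; the paper's exact metric definitions may differ.

```python
from typing import Iterable, Mapping, Tuple

def accuracy_and_disparate_impact(records: Iterable[Mapping[str, str]]) -> Tuple[float, float]:
    """Each record holds the intended ('gold') and translated ('pred') gender, 'F' or 'M'."""
    rows = list(records)
    overall_acc = sum(r["pred"] == r["gold"] for r in rows) / max(len(rows), 1)

    def group_rate(gender: str) -> float:
        group = [r for r in rows if r["gold"] == gender]
        return sum(r["pred"] == gender for r in group) / max(len(group), 1)

    rate_f, rate_m = group_rate("F"), group_rate("M")
    # Disparate impact as a ratio of per-group success rates: 1.0 means parity,
    # lower values mean one gender is resolved correctly less often than the other.
    disparate_impact = min(rate_f, rate_m) / max(rate_f, rate_m) if max(rate_f, rate_m) else 0.0
    return overall_acc, disparate_impact

# Hypothetical usage on evaluated template translations:
# acc, di = accuracy_and_disparate_impact([{"gold": "F", "pred": "M"}, {"gold": "M", "pred": "M"}])
```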

Live Q&A Recording

The recording of this live session has not been uploaded yet. Check back soon, or check out the live session.