โ† Back to Agenda
Paper
Paper
Paper
March 8, 12:00–13:45 UTC
Track One

Paper Session 1

Session Chair: Shalmali Joshi


Mitigating Bias in Set Selection with Noisy Protected Attributes

Anay Mehrotra, L. Elisa Celis
View Paper

Abstract

Subset selection algorithms are ubiquitous in AI-driven applications, including online recruiting portals and image search engines, so it is imperative that these tools are not discriminatory on the basis of protected attributes such as gender or race. Currently, fair subset selection algorithms assume that the protected attributes are known as part of the dataset. However, protected attributes may be noisy due to errors during data collection or if they are imputed (as is often the case in real-world settings). While a wide body of work addresses the effect of noise on the performance of machine learning algorithms, its effect on fairness remains largely unexamined. We find that in the presence of noisy protected attributes, attempting to increase fairness without considering the noise can, in fact, decrease the fairness of the result! To address this, we consider an existing noise model in which there is probabilistic information about the protected attributes (e.g., [19, 32, 44, 56]), and ask: is fair selection possible under noisy conditions? We formulate a "denoised" selection problem that works for a large class of fairness metrics; given the desired fairness goal, the solution to the denoised problem violates the goal by at most a small multiplicative amount with high probability. Although this denoised problem turns out to be NP-hard, we give a linear-programming-based approximation algorithm for it. We evaluate this approach on both synthetic and real-world datasets. Our empirical results show that this approach can produce subsets which significantly improve the fairness metrics despite the presence of noisy protected attributes, and, compared to prior noise-oblivious approaches, has better Pareto tradeoffs between utility and fairness.
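
As a rough illustration of the setting only (not the paper's actual formulation or algorithm), the sketch below selects k candidates when group membership is known only as a noisy, imputed probability: a small LP relaxation asks the expected protected-group count in the selection to meet a target fraction, then rounds naively. The helper fair_topk_noisy and all toy numbers are hypothetical.

```python
# Hypothetical sketch: fair top-k selection under noisy protected attributes.
# Each item i has utility u[i] and an imputed probability p[i] of belonging
# to the protected group; we require the *expected* protected count in the
# selection to be at least min_frac * k.
import numpy as np
from scipy.optimize import linprog

def fair_topk_noisy(u, p, k, min_frac):
    """LP relaxation: maximize utility s.t. sum_i p_i*x_i >= min_frac*k and sum_i x_i = k."""
    n = len(u)
    c = -np.asarray(u, dtype=float)                 # linprog minimizes, so negate utilities
    A_ub = -np.asarray(p, dtype=float).reshape(1, n)  # -p @ x <= -min_frac * k
    b_ub = [-min_frac * k]
    A_eq = np.ones((1, n))                          # exactly k items (fractionally)
    b_eq = [k]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * n, method="highs")
    # Naive rounding for illustration: keep the k largest fractional weights.
    return np.argsort(-res.x)[:k]

# Toy example: 6 candidates with utilities and noisy group probabilities.
utility = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
p_group = [0.1, 0.2, 0.1, 0.8, 0.7, 0.9]
print(fair_topk_noisy(utility, p_group, k=3, min_frac=0.4))
```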

Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases

Ryan Steed, Aylin Caliskan
View Paper

Abstract

Recent advances in machine learning leverage massive datasets of unlabeled images from the web to learn general-purpose image representations for tasks from image classification to face recognition. But do unsupervised computer vision models automatically learn implicit patterns and embed social biases that could have harmful downstream effects? We develop a novel method for quantifying biased associations between representations of social concepts and attributes in images. We find that state-of-the-art unsupervised models trained on ImageNet, a popular benchmark image dataset curated from internet images, automatically learn racial, gender, and intersectional biases. We replicate 8 previously documented human biases from social psychology, from the innocuous, as with insects and flowers, to the potentially harmful, as with race and gender. Our results closely match three hypotheses about intersectional bias from social psychology. For the first time in unsupervised computer vision, we also quantify implicit human biases about weight, disabilities, and several ethnicities. When compared with statistical patterns in online image datasets, our findings suggest that machine learning models can automatically learn bias from the way people are stereotypically portrayed on the web.
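
To make "quantifying biased associations between representations" concrete, here is a small WEAT-style effect-size computation on embedding vectors, in the spirit of (but not identical to) the paper's image embedding association test; the arrays below are random placeholders standing in for embeddings produced by an unsupervised vision model.

```python
# Sketch of an embedding association test. X, Y are embeddings of two target
# concepts (e.g., flowers vs. insects); A, B are embeddings of two attributes
# (e.g., pleasant vs. unpleasant). All vectors here are made-up placeholders.
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Differential association of one embedding w with attribute sets A and B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def effect_size(X, Y, A, B):
    """Effect size d: positive when X is more associated with A (relative to B) than Y is."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Placeholder 8-dimensional "image embeddings"; real ones would come from a
# pre-trained unsupervised model.
rng = np.random.default_rng(0)
X, Y, A, B = (rng.normal(size=(5, 8)) for _ in range(4))
print(effect_size(X, Y, A, B))
```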

Group Fairness: Independence Revisited

Tim Räz
View Paper

Abstract

This paper critically examines arguments against independence, a measure of group fairness also known as statistical parity and as demographic parity. In recent discussions of fairness in computer science, some have maintained that independence is not a suitable measure of group fairness. This position is at least partially based on two influential papers (Dwork et al., 2012; Hardt et al., 2016) that provide arguments against independence. We revisit these arguments, and we find that the case against independence is rather weak. We also give arguments in favor of independence, showing that it plays a distinctive role in considerations of fairness. Finally, we discuss how to balance different fairness considerations.
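
For readers unfamiliar with the measure under discussion: independence (statistical or demographic parity) requires the positive-prediction rate P(Ŷ = 1 | A = a) to be equal across groups. A tiny illustrative check on made-up predictions (all values below are hypothetical):

```python
# Per-group positive-prediction rates and the demographic parity gap.
import numpy as np

def selection_rates(y_pred, group):
    """P(Yhat = 1 | A = a) for each group a."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return {str(g): float(y_pred[group == g].mean()) for g in np.unique(group)}

y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = selection_rates(y_pred, group)
print(rates)                                      # {'a': 0.75, 'b': 0.25}
print(max(rates.values()) - min(rates.values()))  # demographic parity gap = 0.5
```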

Live Q&A Recording

This live session has not been uploaded yet. Check back soon or check out the live session.