โ† Back to Agenda
March 9, 14:00–15:50 UTC
Track Two

Paper Session 14

Session Chair: Daniel Neill


Fairness Violations and Mitigation under Covariate Shift

Harvineet Singh, Rina Singh, Vishwali Mhasawade, Rumi Chunara
View Paper

Abstract

We study the problem of learning fair prediction models for unseen test sets distributed differently from the train set. Stability against changes in data distribution is an important mandate for responsible deployment of models. The domain adaptation literature addresses this concern, albeit with the notion of stability limited to that of prediction accuracy. We identify sufficient conditions under which stable models, both in terms of prediction accuracy and fairness, can be learned. Using the causal graph describing the data and the anticipated shifts, we specify an approach based on feature selection that exploits conditional independencies in the data to estimate accuracy and fairness metrics for the test set. We show that for specific fairness definitions, the resulting model satisfies a form of worst-case optimality. In the context of a healthcare task, we illustrate the advantages of the approach in making more equitable decisions.
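
As a rough illustration of the kind of pipeline this abstract describes (keep only features whose relation to the label is stable across environments, then evaluate accuracy and a fairness metric on the shifted environment), here is a minimal sketch. It is not the authors' method: the synthetic data, the environment indicator env, the protected attribute a, the crude mean-gap check standing in for a conditional-independence test, and the demographic parity gap are all assumptions made for the example.

```python
# Minimal sketch (not the authors' implementation): train only on features
# whose relationship to the label appears stable across environments, then
# estimate accuracy and a fairness metric on the shifted environment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
env = rng.integers(0, 2, n)                  # training vs. anticipated test environment (assumed)
a = rng.integers(0, 2, n)                    # protected attribute (assumed)
x_stable = rng.normal(size=n) + 0.5 * a      # feature with an env-invariant relation to y
x_shifting = rng.normal(size=n) + 1.5 * env  # feature whose distribution shifts with env
logit = 1.2 * x_stable - 0.3 * a
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def depends_on_env(feature, env, y, thresh=0.5):
    """Crude proxy for a conditional-independence check: within each label
    class, does the feature's mean differ noticeably across environments?"""
    gaps = [abs(feature[(y == c) & (env == 0)].mean()
                - feature[(y == c) & (env == 1)].mean()) for c in (0, 1)]
    return max(gaps) > thresh

candidates = {"x_stable": x_stable, "x_shifting": x_shifting}
selected = [name for name, col in candidates.items() if not depends_on_env(col, env, y)]
X = np.column_stack([candidates[name] for name in selected])

train, test = env == 0, env == 1             # fit on one environment, evaluate on the other
clf = LogisticRegression().fit(X[train], y[train])
pred = clf.predict(X[test])

accuracy = (pred == y[test]).mean()
dp_gap = abs(pred[a[test] == 0].mean() - pred[a[test] == 1].mean())  # demographic parity gap
print(f"selected features: {selected}")
print(f"test-environment accuracy: {accuracy:.3f}, demographic parity gap: {dp_gap:.3f}")
```

In the paper the selection is driven by the causal graph of the data and the anticipated shifts; the mean-gap test above is only a stand-in to keep the sketch self-contained.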

Removing Spurious Features can Hurt Accuracy and Affect Groups Disproportionately

Fereshte Khani, Percy Liang
View Paper

Abstract

The presence of spurious features interferes with the goal of obtaining robust models that perform well across many groups within the population. A natural remedy is to remove spurious features from the model. However, in this work we show that removal of spurious features can decrease accuracy due to the inductive biases of overparameterized models. We completely characterize how the removal of spurious features affects accuracy across different groups (more generally, test distributions) in noiseless overparameterized linear regression. In addition, we show that removal of a spurious feature can decrease accuracy even in balanced datasets, where each target co-occurs equally with each spurious feature, and it can inadvertently make the model more susceptible to other spurious features. Finally, we show that robust self-training can remove spurious features without affecting the overall accuracy. Experiments on the Toxic-Comment-Detection and CelebA datasets show that our results hold in non-linear models.
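
The overparameterized regime this abstract analyzes can be probed with a toy sketch like the one below: fit minimum-norm interpolators with and without a spurious feature and compare per-group test error. This is only an illustration under made-up assumptions (the dimensions, the single spurious feature, the two-group construction, and minimum-norm least squares via the pseudoinverse); it is not the paper's analysis, and the relative errors it prints depend entirely on those choices.

```python
# Toy sketch (assumption-laden illustration, not the paper's experiments):
# compare minimum-norm interpolators in noiseless overparameterized linear
# regression when a "spurious" feature is kept vs. dropped, per group.
import numpy as np

rng = np.random.default_rng(1)
n, d = 30, 100                               # n < d: overparameterized regime
w_core = rng.normal(size=d) / np.sqrt(d)     # true signal uses the core features only

def make_group(m, spurious_shift):
    """Core features plus one spurious feature whose typical value tracks the group."""
    X_core = rng.normal(size=(m, d))
    spurious = spurious_shift + 0.1 * rng.normal(size=m)
    y = X_core @ w_core                      # noiseless labels, independent of the spurious feature
    return np.column_stack([X_core, spurious]), y

def min_norm_fit(X, y):
    """Minimum-norm least-squares solution (interpolates when rows <= columns)."""
    return np.linalg.pinv(X) @ y

def group_error(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Training set mixes two groups whose spurious feature takes different typical values.
Xa, ya = make_group(n // 2, spurious_shift=+2.0)
Xb, yb = make_group(n // 2, spurious_shift=-2.0)
X_train, y_train = np.vstack([Xa, Xb]), np.concatenate([ya, yb])

w_full = min_norm_fit(X_train, y_train)              # keep the spurious feature
w_drop = min_norm_fit(X_train[:, :-1], y_train)      # remove the spurious feature

for name, (Xg, yg) in {"group A": make_group(500, +2.0), "group B": make_group(500, -2.0)}.items():
    err_full = group_error(w_full, Xg, yg)
    err_drop = group_error(w_drop, Xg[:, :-1], yg)
    print(f"{name}: test MSE with spurious feature {err_full:.3f}, without {err_drop:.3f}")
```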

Epistemic Values in Feature Importance Methods: Lessons From Feminist Epistemology

Leif Hancox-Li, I. Elizabeth Kumar
View Paper

Abstract

As the public seeks greater accountability and transparency from machine learning algorithms, the research literature on methods to explain algorithms and their outputs has rapidly expanded. Feature importance methods form a popular class of explanation methods. In this paper, we apply the lens of feminist epistemology to recent feature importance research. We investigate what epistemic values are implicitly embedded in feature importance methods and how or whether they are in conflict with feminist epistemology. We offer some suggestions on how to conduct research on explanations that respects feminist epistemic values, taking into account the importance of social context, the epistemic privileges of subjugated knowers, and adopting more interactional ways of knowing.

Representativeness in Statistics, Politics, and Machine Learning

Kyla Chasalow, Karen Levy
View Paper

Abstract

Representativeness is a foundational yet slippery concept. Though familiar at first blush, it lacks a single precise meaning. Instead, meanings range from typical or characteristic, to a proportionate match between sample and population, to a more general sense of accuracy, generalizability, coverage, or inclusiveness. Moreover, the concept has long been contested. In statistics, debates about the merits and methods of selecting a representative sample date back to the late 19th century; in politics, debates about the value of likeness as a logic of political representation are older still. Today, as the concept crops up in the study of fairness and accountability in machine learning, we need to carefully consider the term's meanings in order to communicate clearly and account for their normative implications. In this paper, we ask what representativeness means, how it is mobilized socially, and what values and ideals it communicates or confronts. We trace the concept's history in statistics and discuss normative tensions concerning its relationship to likeness, exclusion, authority, and aspiration. We draw on these analyses to think through how representativeness is used in FAccT debates, with emphasis on data, shift, participation, and power.

Live Q&A Recording

The recording of this live session has not been uploaded yet. Check back soon, or join the live session.