Paper
March 10, 14:00–15:45 UTC
Track Two

Paper Session 24

Session Chair: Vinodkumar Prabhakaran

Avoiding Disparity Amplification under Different Worldviews

Samuel Yeom, Michael Carl Tschantz
View Paper

Abstract

We mathematically compare four competing definitions of group-level nondiscrimination: demographic parity, equalized odds, predictive parity, and calibration. Using the theoretical framework of Friedler et al., we study the properties of each definition under various worldviews, which are assumptions about how, if at all, the observed data is biased. We argue that different worldviews call for different definitions of fairness, and we specify the worldviews that, when combined with the desire to avoid a criterion for discrimination that we call disparity amplification, motivate demographic parity and equalized odds. We also argue that predictive parity and calibration are insufficient for avoiding disparity amplification because predictive parity allows an arbitrarily large inter-group disparity and calibration is not robust to post-processing. Finally, we define a worldview that is more realistic than the previously considered ones, and we introduce a new notion of fairness that corresponds to this worldview.
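
For quick reference, the four criteria named above are usually formalized as follows. This is a sketch in our own notation (predicted label Ŷ, true outcome Y, score S, and group attribute A), giving the standard definitions rather than the paper's exact statements:

```latex
% A sketch of the standard formalizations of the four group fairness criteria
% (our notation, not necessarily the paper's exact statements); requires amsmath.
% \hat{Y} = predicted label, Y = true outcome, S = risk score, A = group attribute.
\begin{align*}
  % Demographic parity: equal positive-prediction rates across groups.
  \Pr[\hat{Y}=1 \mid A=a] &= \Pr[\hat{Y}=1 \mid A=b] \\
  % Equalized odds: equal true- and false-positive rates across groups.
  \Pr[\hat{Y}=1 \mid Y=y,\, A=a] &= \Pr[\hat{Y}=1 \mid Y=y,\, A=b]
    \quad \text{for } y \in \{0,1\} \\
  % Predictive parity: equal precision (positive predictive value) across groups.
  \Pr[Y=1 \mid \hat{Y}=1,\, A=a] &= \Pr[Y=1 \mid \hat{Y}=1,\, A=b] \\
  % Calibration: within each group, a score of s implies probability s of a positive outcome.
  \Pr[Y=1 \mid S=s,\, A=a] &= s \quad \text{for all } s \text{ and } a
\end{align*}
```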

“You Can’t Sit With Us”: Exclusionary Pedagogy in AI Ethics Education

Inioluwa Deborah Raji, Morgan Klaus Scheuerman, Razvan Amironesei
View Paper

Abstract

Given a growing concern about the lack of ethical consideration in the Artificial Intelligence (AI) field, many have begun to question how dominant approaches to the disciplinary education of computer science (CS)—and its implications for AI—have led to the current “ethics crisis”. However, we claim that the current AI ethics education space relies on a form of “exclusionary pedagogy,” where ethics is distilled for computational approaches, but there is no deeper epistemological engagement with other ways of knowing that would benefit ethical thinking, nor an acknowledgement of the limitations of uni-vocal computational thinking. This results in indifference, devaluation, and a lack of mutual support between CS and humanistic social science (HSS), elevating the myth of technologists as “ethical unicorns” who can do it all, though their disciplinary tools are ultimately limited. Through an analysis of computer science education literature and a review of college-level course syllabi in AI ethics, we discuss the limitations of the epistemological assumptions and hierarchies of knowledge that dictate current attempts at including ethics education in CS training, and we explore evidence for the practical mechanisms through which this exclusion occurs. We then propose a shift towards a substantively collaborative, holistic, and ethically generative pedagogy in AI education.

"I agree with the decision, but they didn't deserve this": Future Developers' Perception of Fairness in Algorithmic Decisions

M. Kasinidou, S. Kleanthous, P. Barlas, J. Otterbacher
View Paper

Abstract

While professionals increasingly rely on algorithmic systems to make decisions, on some occasions algorithmic decisions may be perceived as biased or unjust. Prior work has looked into the perception of algorithmic decision-making from the user’s point of view. In this work, we investigate how students in fields adjacent to algorithm development perceive algorithmic decision-making. Participants (N=99) were asked to rate their agreement with statements regarding six constructs related to facets of fairness and justice in algorithmic decision-making in three separate scenarios. Two of the three scenarios were independent of each other, while the third scenario presented three different outcomes of the same algorithmic system, demonstrating perception changes triggered by different outputs. Quantitative analysis indicates that a) ‘agreeing’ with a decision does not mean the person ‘deserves the outcome’, b) perceiving the factors used in the decision-making as ‘appropriate’ does not make the system’s decision ‘fair’, and c) perceiving a system’s decision as ‘not fair’ affects the participants’ ‘trust’ in the system. In addition, participants found a proportional distribution of benefits fairer than other approaches. Qualitative analysis provides further insights into what information the participants find essential for judging and understanding the fairness of an algorithmic decision-making system. Finally, the level of academic education plays a role in the perception of fairness and justice in algorithmic decision-making.

Live Q&A Recording

A recording of this live session has not been uploaded yet. Check back soon, or join the live session.