March 9, 2021, 12:00–13:50 UTC
Track One

Paper Session 11

Session Chair: Jade Abbott

Narratives and Counternarratives on Data Sharing in Africa

Rediet Abebe, Kehinde Aruleba, Abeba Birhane, Sara Kingsley, George Obaido, Sekou L. Remy, Swathi Sadagopan
View Paper

Abstract

As machine learning and data science applications grow ever more prevalent, there is an increased focus on data sharing and open data initiatives, particularly in the context of the African continent. Many argue that data sharing can support research and policy design to alleviate poverty, inequality, and derivative effects in Africa. Despite the fact that the datasets in question are often extracted from African communities, conversations around the challenges of accessing and sharing African data are too often driven by non-African stakeholders. These perspectives frequently employ deficit narratives, often focusing on lack of education, training, and technological resources in the continent as the leading causes of friction in the data ecosystem. We argue that these narratives obfuscate and distort the full complexity of the African data sharing landscape. In particular, we use storytelling via fictional personas built from a series of interviews with African data experts to complicate dominant narratives and to provide counternarratives. Coupling these personas with research on data practices within the continent, we identify recurring barriers to data sharing as well as inequities in the distribution of data sharing benefits. In particular, we discuss issues arising from power imbalances resulting from the legacies of colonialism, ethno-centrism, and slavery, disinvestment in building trust, lack of acknowledgement of historical and present-day extractive practices, and Western-centric policies that are ill-suited to the African context. After outlining these problems, we discuss avenues for addressing them when sharing data generated in the continent.

The Ethics of Emotion in Artificial Intelligence Systems

Luke Stark, Jesse Hoey
View Paper

Abstract

In this paper, we develop a taxonomy of conceptual models and proxy data used for digital analysis of human emotional expression and outline how the combinations and permutations of these models and data impact their incorporation into artificial intelligence (AI) systems. We argue we should not take computer scientists at their word that the paradigms for human emotions they have developed internally and adapted from other disciplines can produce ground truth about human emotions; instead, we ask how different conceptualizations of what emotions are, and how they can be sensed, measured and transformed into data, shape the ethical and social implications of these AI systems.

The Algorithmic Leviathan: Arbitrariness, Unfairness, and Opportunity in Algorithmic Decision Making Systems

Kathleen Creel, Deborah Hellman
View Paper

Abstract

Automated decision-making systems implemented in public life are typically standardized. One algorithmic decision-making system can replace thousands of human deciders. Each of the humans so replaced had her own decision-making criteria: some good, some bad, and some arbitrary. Is such arbitrariness of moral concern? We argue that an isolated arbitrary decision need not morally wrong the individual whom it misclassifies. However, if the same algorithms are applied across a public sphere, such as hiring or lending, a person could be excluded from a large number of opportunities. This harm persists even when the automated decision-making systems are “fair” on standard metrics of fairness. We argue that such arbitrariness at scale is morally problematic and propose technically informed solutions that can lessen the impact of algorithms at scale and so mitigate or avoid the moral harms we identify.
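
The claim that the harm of arbitrariness at scale survives standard fairness metrics can be made concrete with a small simulation. The sketch below is a hypothetical illustration, not code or data from the paper: the applicant pool, the 50 employers, the shared score threshold, and the noise model for idiosyncratic deciders are all assumptions. It shows a single shared rule that roughly satisfies demographic parity across two groups while locking the same half of the pool out of every opportunity, whereas noisy, employer-specific rules spread rejections around.

```python
# Hypothetical illustration (not from the paper): one standardized decision rule
# can look "fair" on a group metric such as demographic parity and still reject
# the same individuals everywhere, while many idiosyncratic deciders do not.
import random

random.seed(0)

# 1,000 synthetic applicants; the score is independent of group membership.
applicants = [{"id": i,
               "group": "A" if i % 2 == 0 else "B",
               "score": random.random()} for i in range(1000)]

N_EMPLOYERS = 50        # assumed number of hiring pipelines in the public sphere
SELECTION_RATE = 0.5    # each employer accepts half of the pool it sees


def shared_rule(pool):
    """Every employer applies the same threshold on the same score."""
    cutoff = sorted(a["score"] for a in pool)[int(len(pool) * (1 - SELECTION_RATE))]
    return {a["id"] for a in pool if a["score"] >= cutoff}


def idiosyncratic_rule(pool):
    """Each employer ranks applicants by the score plus its own random noise."""
    noisy = sorted(pool, key=lambda a: a["score"] + random.gauss(0, 0.5))
    return {a["id"] for a in noisy[int(len(pool) * (1 - SELECTION_RATE)):]}


def selection_rates(rule):
    """Per-group acceptance rates for one employer (a demographic-parity check)."""
    accepted = rule(applicants)
    return {g: sum(a["id"] in accepted for a in applicants if a["group"] == g)
               / sum(a["group"] == g for a in applicants)
            for g in ("A", "B")}


def excluded_everywhere(rule):
    """How many applicants are rejected by every one of the N_EMPLOYERS."""
    accepted_somewhere = set()
    for _ in range(N_EMPLOYERS):
        accepted_somewhere |= rule(applicants)
    return len(applicants) - len(accepted_somewhere)


print("per-group acceptance rates, shared rule:", selection_rates(shared_rule))
print("rejected by all employers, shared rule: ", excluded_everywhere(shared_rule))
print("rejected by all employers, idiosyncratic:", excluded_everywhere(idiosyncratic_rule))
```

Under the shared rule the per-group acceptance rates come out close to equal, yet the same 500 applicants are rejected by all 50 employers; under the noisy employer-specific rules almost everyone is accepted somewhere. That is the arbitrariness-at-scale concern in miniature.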

Fairness, Equality, and Power in Algorithmic Decision-Making

Maximilian Kasy, Rediet Abebe
View Paper

Abstract

Much of the debate on the impact of algorithms is concerned with fairness, defined as the absence of discrimination for individuals with the same “merit.” Drawing on the theory of justice, we argue that leading notions of fairness suffer from three key limitations: they legitimize inequalities justified by “merit;” they are narrowly bracketed, considering only differences of treatment within the algorithm; and they consider between-group and not within-group differences. We contrast this fairness-based perspective with two alternate perspectives: the first focuses on inequality and the causal impact of algorithms and the second on the distribution of power. We formalize these perspectives drawing on techniques from causal inference and empirical economics, and characterize when they give divergent evaluations. We present theoretical results and empirical examples demonstrating this tension. We further use these insights to present a guide for algorithmic auditing. We discuss the importance of inequality and power-centered frameworks in algorithmic decision-making.
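
The divergence between a fairness-based evaluation and an evaluation centered on inequality and causal impact can be illustrated with a toy lending example. The sketch below is a hypothetical illustration, not the authors' formalization: the wealth model, the credit-score threshold, and the LOAN_GAIN value are all assumptions. The rule treats equal-“merit” applicants identically across groups, yet deploying it widens the between-group wealth gap relative to a counterfactual policy of approving everyone.

```python
# Hypothetical illustration (not the paper's formalization): a rule that is
# "fair" toward equal-merit applicants can still increase between-group
# inequality compared with a counterfactual lending policy.
import random

random.seed(1)


def make_applicant(group):
    # Assumed data-generating process: group B starts with less wealth,
    # and the "merit" score (a credit score) largely tracks wealth.
    wealth = random.gauss(100, 10) if group == "A" else random.gauss(60, 10)
    score = wealth / 100 + random.gauss(0, 0.05)
    return {"group": group, "wealth": wealth, "score": score}


population = ([make_applicant("A") for _ in range(5000)]
              + [make_applicant("B") for _ in range(5000)])

LOAN_GAIN = 30  # assumed wealth added by an approved loan


def algorithm(applicant):
    return applicant["score"] >= 0.8  # one threshold for everyone


def equal_merit_treatment():
    """Approval rate by group among applicants in the same merit band."""
    band = [a for a in population if 0.8 <= a["score"] < 0.9]
    return {g: sum(algorithm(a) for a in band if a["group"] == g)
               / max(1, sum(a["group"] == g for a in band))
            for g in ("A", "B")}


def wealth_gap(policy):
    """Between-group mean wealth gap after applying a loan policy to everyone."""
    post = {"A": [], "B": []}
    for a in population:
        post[a["group"]].append(a["wealth"] + (LOAN_GAIN if policy(a) else 0))
    return sum(post["A"]) / len(post["A"]) - sum(post["B"]) / len(post["B"])


print("approval rates given equal merit:", equal_merit_treatment())
print("wealth gap if everyone is approved:", round(wealth_gap(lambda a: True), 1))
print("wealth gap under the algorithm:    ", round(wealth_gap(algorithm), 1))
```

The first line of output shows identical treatment of equal-merit applicants (the fairness lens), while the last two lines show that, relative to approving everyone, the algorithm enlarges the group wealth gap (the inequality and causal-impact lens); the two evaluations reach different verdicts about the same rule.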

Live Q&A Recording

A recording of this live session has not been uploaded yet. Check back soon, or watch the session live.