Paper
March 9, 12:00-13:50 UTC
Track Two

Paper Session 12

Session Chair: Shannon Vallor


A Semiotics-Based Epistemic Tool to Reason About Ethical Issues in Digital Technology Design and Development

Simone Diniz Junqueira Barbosa, Gabriel Diniz Junqueira Barbosa, Clarisse Sieckenius de Souza, Carla Faria Leitão
View Paper

Abstract

One of the important challenges in developing morally responsible and ethically qualified digital technologies is how to support designers and developers in producing those technologies, especially when conceptualizing their vision of what the technology will be, how it will benefit users, and how it will avoid doing harm. However, traditional software design and development life cycles do not explicitly support reflection on ethical or moral issues. In this paper we look at how a number of ethical issues may be dealt with during digital technology design and development, to prevent damage and improve technological fairness, accountability, and transparency. Starting from mature work on semiotic theory and methods in human-computer interaction, we propose to extend the core artifact used in semiotic engineering of human-centered technology design so as to directly address moral responsibility and ethical issues. The resulting extension is an epistemic tool, that is, an instrument for creating and elaborating on this specific kind of knowledge. The paper describes the tool, illustrates how it is to be used, and discusses its promises and limitations against the background of related work. It also includes proposed empirical studies, accompanied by briefly described methodological challenges and considerations that deserve our attention.
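
As a rough illustration of what an epistemic tool of this kind might look like in practice, here is a minimal sketch, entirely our own hypothetical construction: the section names and prompts below are not taken from the paper. It pairs the usual design "vision" questions with explicit ethical-reflection prompts in one checklist structure.

```python
# Hypothetical sketch of an ethics-extended design template; section names
# and prompts are illustrative assumptions, not the paper's actual artifact.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class TemplateSection:
    name: str                       # e.g., "Vision" or "Ethical reflection"
    prompts: list[str]              # questions the design team must answer
    answers: list[str] = field(default_factory=list)

def ethics_extended_template() -> list[TemplateSection]:
    """Build a design-time checklist pairing vision questions with
    explicit ethical-reflection prompts."""
    return [
        TemplateSection("Vision", [
            "Who do we believe the users are?",
            "What do we believe they want or need to do?",
        ]),
        TemplateSection("Ethical reflection", [
            "Who could be harmed if our beliefs about users are wrong?",
            "Which stakeholders are affected but not represented?",
            "What would unfair, opaque, or unaccountable behavior look like here?",
        ]),
    ]

if __name__ == "__main__":
    for section in ethics_extended_template():
        print(section.name)
        for question in section.prompts:
            print("  -", question)
```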

Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems

Jennifer Cobbe, Michelle Seng Ah Lee, Jatinder Singh
View Paper

Abstract

This paper introduces reviewability as a framework for improving the accountability of automated and algorithmic decision-making (ADM) involving machine learning. We draw on an understanding of ADM as a socio-technical process involving both human and technical elements, beginning before a decision is made and extending beyond the decision itself. While explanations and other model-centric mechanisms may address some accountability concerns, they often provide insufficient information about these broader ADM processes for regulatory oversight and assessments of legal compliance. Reviewability involves breaking down the ADM process into technical and organisational elements to provide a systematic framework for determining the contextually appropriate record-keeping mechanisms to facilitate meaningful review - both of individual decisions and of the process as a whole. We argue that a reviewability framework, drawing on administrative law's approach to reviewing human decision-making, offers a practical way forward towards a more holistic and legally relevant form of accountability for ADM.
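
To give a concrete feel for the record-keeping side of this idea, here is a minimal hypothetical sketch, not the authors' framework: a decision record that logs both organisational and technical stages of an ADM pipeline so that an individual decision, or the process as a whole, can be reviewed later. All stage names and fields are illustrative assumptions.

```python
# Illustrative sketch of stage-by-stage record-keeping for an ADM pipeline.
# Stage names and fields are assumed for this example.
import json
import time
import uuid

class DecisionRecord:
    """Accumulates an auditable trace across the socio-technical ADM process,
    from commissioning through to what happens after the decision."""

    def __init__(self, subject_id: str):
        self.record = {
            "decision_id": str(uuid.uuid4()),
            "subject_id": subject_id,
            "stages": [],  # ordered log of organisational and technical steps
        }

    def log(self, stage: str, **details):
        self.record["stages"].append(
            {"stage": stage, "timestamp": time.time(), **details}
        )

    def export(self) -> str:
        return json.dumps(self.record, indent=2)

record = DecisionRecord(subject_id="applicant-42")
record.log("commissioning", purpose="loan approval", legal_basis="contract")
record.log("model", version="risk-model-v3", training_data="loans-2019-2020")
record.log("decision", inputs={"income": 41000}, output="deny", human_override=False)
record.log("post-decision", notified=True, appeal_route="manual review")
print(record.export())
```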

An Action-Oriented AI Policy Toolkit for Technology Audits by Community Advocates and Activists

P M Krafft, Meg Young, Michael Katell, Jennifer E. Lee, Shankar Narayan, Micah Epstein, Dharma Dailey, Bernease Hermann, Aaron Tam, Vivian Guetler, Corinne Bintz, Daniella Raz, Pa Ousman Jobe, Franziska Putz, Brian Robick, Bissan Barghouti
View Paper

Abstract

Motivated by the extensive documented disparate harms of artificial intelligence (AI), many recent practitioner-facing reflective tools have been created to promote responsible AI development. However, the use of such tools internally by technology development firms addresses responsible AI as an issue of closed-door compliance rather than a matter of public concern. Recent advocate and activist efforts intervene in AI as a public policy problem, inciting a growing number of cities to pass bans or other ordinances on AI and surveillance technologies. In support of this broader ecology of political actors, we present a set of reflective tools intended to increase public participation in technology advocacy for AI policy action. To this end, the Algorithmic Equity Toolkit (the AEKit) provides a practical policy-facing definition of AI, a flowchart for assessing technologies against that definition, a worksheet for decomposing AI systems into constituent parts, and a list of probing questions that can be posed to vendors, policy-makers, or government agencies. The AEKit carries an action-orientation towards political encounters between community groups in the public and their representatives, opening up the work of AI reflection and remediation to multiple points of intervention. Unlike current reflective tools available to practitioners, our toolkit carries with it a politics of community participation and activism.
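
As a hypothetical illustration of how a policy-facing flowchart could be encoded, the sketch below walks a short sequence of yes/no questions to a coarse classification an advocate could raise with a vendor or agency. The questions and categories are our own assumptions, not the AEKit's actual contents.

```python
# Hypothetical encoding of an "is this AI?" assessment flowchart.
# The questions and output categories below are illustrative assumptions.
def assess_technology(answers: dict[str, bool]) -> str:
    """Walk a short decision path over yes/no answers and return a coarse
    classification with a suggested line of questioning."""
    if not answers.get("collects_data_about_people"):
        return "out of scope"
    if answers.get("makes_or_informs_decisions_about_people"):
        if answers.get("uses_learned_or_statistical_model"):
            return "automated decision system: ask for audits, error rates, appeal routes"
        return "rule-based decision system: ask who set the rules and why"
    if answers.get("monitors_identifies_or_tracks_people"):
        return "surveillance technology: ask about retention, sharing, oversight"
    return "data-collecting system: ask about purpose and consent"

print(assess_technology({
    "collects_data_about_people": True,
    "makes_or_informs_decisions_about_people": True,
    "uses_learned_or_statistical_model": True,
}))
```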

Algorithmic Recourse: from Counterfactual Explanations to Interventions

Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera
View Paper

Abstract

As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also to suggest actions to achieve a favorable decision. Counterfactual explanations -- "how the world would have (had) to be different for a desirable outcome to occur" -- aim to satisfy these criteria. Existing works have primarily focused on designing algorithms to obtain counterfactual explanations for a wide range of settings. However, one of the main objectives of "explanations as a means to help a data-subject act rather than merely understand" has been overlooked. In layman's terms, counterfactual explanations inform an individual where they need to get to, but not how to get there. In this work, we rely on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse. Instead, we propose a paradigm shift from recourse via nearest counterfactual explanations to recourse through minimal interventions, moving the focus from explanations to recommendations. Finally, we provide the reader with an extensive discussion on how to realistically achieve recourse beyond structural interventions.
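
A toy worked example, our own construction rather than the paper's, makes the contrast concrete: in a simple structural causal model where savings depend on income, a counterfactual explanation that varies one feature while holding the other fixed demands a larger change than an intervention, which lets the change propagate through the structural equations.

```python
# Toy structural causal model (assumed for illustration):
#     income  := u_income
#     savings := 0.3 * income + u_savings   (savings depends on income)
#     approve iff 0.5 * income + savings >= 80

def approved(income: float, savings: float) -> bool:
    return 0.5 * income + savings >= 80

# One denied individual, described by exogenous background factors.
u_income, u_savings = 80.0, 10.0
income = u_income
savings = 0.3 * income + u_savings        # = 34.0
assert not approved(income, savings)      # score 74 < 80: denied

# Counterfactual explanation: vary 'income' alone, holding 'savings' fixed.
# Needs income >= 92, i.e. a +12 change, because the causal link is ignored.
cf_income = (80 - savings) / 0.5
print("counterfactual: raise income to", cf_income)   # 92.0

# Recourse via minimal intervention: do(income := 90) and let the structural
# equation update savings downstream. A +10 change already flips the decision.
new_income = 90.0
new_savings = 0.3 * new_income + u_savings            # 37.0
print("intervention approved:", approved(new_income, new_savings))  # True
```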

Live Q&A Recording

This live session has not been uploaded yet. Check back soon, or watch the live session.