Paper
March 8, 2021, 20:00 - 21:45 UTC
Track One

Paper Session 5

Session Chair: Sanmi Koyejo

Reasons, Values, Stakeholders: A Philosophical Framework for Explainable AI

Atoosa Kasirzadeh

Abstract

The societal and ethical implications of the use of opaque artificial intelligence systems for consequential decisions, such as welfare allocation and criminal justice, have generated a lively debate among multiple stakeholder groups, including computer scientists, ethicists, social scientists, policy makers, and end users. However, the lack of a common language or a multi-dimensional framework to appropriately bridge the technical, epistemic, and normative aspects of this debate prevents the discussion from being as productive as it could be. Drawing on the philosophical literature on the nature and value of explanations, this paper offers a multi-faceted framework that brings more conceptual precision to the present debate by (1) identifying the types of explanations that are most pertinent to artificial intelligence predictions, (2) recognizing the relevance and importance of social and ethical values for the evaluation of these explanations, and (3) demonstrating the importance of these explanations for incorporating a diversified approach to improving the design of truthful algorithmic ecosystems. The proposed philosophical framework thus lays the groundwork for establishing a pertinent connection between the technical and ethical aspects of artificial intelligence systems.

High Dimensional Model Explanations: An Axiomatic Approach

Neel Patel, Martin Strobel, Yair Zick

Abstract

Complex black-box machine learning models are regularly used in critical decision-making domains. This has given rise to several calls for algorithmic explainability. Many explanation algorithms proposed in the literature assign importance to each feature individually. However, such explanations fail to capture the joint effects of sets of features. Indeed, few works so far formally analyze high-dimensional model explanations. In this paper, we propose a novel high-dimensional model explanation method that captures the joint effect of feature subsets. We propose a new axiomatization for a generalization of the Banzhaf index; our method can also be thought of as an approximation of a black-box model by a higher-order polynomial. In other words, this work justifies the use of the generalized Banzhaf index as a model explanation by showing that it uniquely satisfies a set of natural desiderata and that it is the optimal local approximation of a black-box model. An empirical evaluation of our measure highlights how it manages to capture desirable behavior, whereas other measures that do not satisfy our axioms behave in an unpredictable manner.
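To make the quantity in this abstract concrete, the sketch below estimates a generalized Banzhaf interaction index for a feature subset by Monte Carlo sampling of coalitions of the remaining features. It is only an illustration of the interaction index the abstract refers to, not the authors' implementation; the `value_fn` interface, `make_value_fn` helper, and the baseline-imputation convention are assumptions made for the example.

```python
import itertools

import numpy as np


def banzhaf_interaction(value_fn, n_features, subset, n_samples=2000, seed=None):
    """Monte Carlo estimate of the generalized Banzhaf interaction index for
    a feature subset S: the average, over coalitions T drawn uniformly from
    the remaining features, of the discrete derivative
        sum over L <= S of (-1)^(|S|-|L|) * value_fn(T union L).
    """
    rng = np.random.default_rng(seed)
    subset = tuple(subset)
    rest = [i for i in range(n_features) if i not in subset]
    total = 0.0
    for _ in range(n_samples):
        # Include each feature outside S in the coalition with probability 1/2.
        mask = rng.random(len(rest)) < 0.5
        coalition = {f for f, keep in zip(rest, mask) if keep}
        # Inclusion-exclusion over all sub-subsets L of S.
        for r in range(len(subset) + 1):
            for L in itertools.combinations(subset, r):
                sign = (-1) ** (len(subset) - len(L))
                total += sign * value_fn(coalition | set(L))
    return total / n_samples


def make_value_fn(predict, x, baseline):
    """One common (assumed) coalition convention: features outside the
    coalition are replaced by baseline values before querying the model."""
    def value_fn(coalition):
        z = baseline.copy()
        for i in coalition:
            z[i] = x[i]
        return float(predict(z.reshape(1, -1))[0])
    return value_fn
```

For a singleton subset this estimate reduces to the ordinary Banzhaf value of that feature; for a pair or larger subset it measures the joint effect that, as the abstract notes, per-feature importance scores cannot express.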

How Can I Choose an Explainer? An Application-Grounded Evaluation of Post-hoc Explanations

Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, Pedro Bizarro, João Gama

Abstract

Several research works have proposed new Explainable AI (XAI) methods designed to generate model explanations with specific properties, or desiderata, such as fidelity, robustness, or human-interpretability. However, explanations are seldom evaluated based on their true practical impact on decision-making tasks. Without that assessment, explanations might be chosen that, in fact, hurt the overall performance of the combined system of ML model + end-users. This study aims to bridge this gap by proposing XAI Test, an application-grounded evaluation methodology tailored to isolate the impact of providing the end-user with different levels of information. We conducted an experiment following XAI Test to evaluate three popular post-hoc explanation methods -- LIME, SHAP, and TreeInterpreter -- on a real-world fraud detection task, with real data, a deployed ML model, and fraud analysts. During the experiment, we gradually increased the information provided to the fraud analysts in three stages: Data Only, i.e., just transaction data without access to the model score or explanations, Data + ML Model Score, and Data + ML Model Score + Explanations. Using strong statistical analysis, we show that, in general, these popular explainers have a worse impact than desired. Highlights of the conclusions include: i) Data Only results in the highest decision accuracy and the slowest decision time among all variants tested; ii) all the explainers improve accuracy over the Data + ML Model Score variant but still result in lower accuracy than Data Only; iii) LIME was the least preferred by users, probably due to its substantially lower variability of explanations from case to case.
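As a rough sketch of how results from a staged experiment like this could be summarised, the snippet below computes per-condition decision accuracy and decision time and runs a chi-squared test of independence between condition and correctness. The column names, condition labels, and choice of test are assumptions made for illustration; they are not the data schema or statistical analysis used in the paper.

```python
import pandas as pd
from scipy.stats import chi2_contingency


def summarize_conditions(decisions: pd.DataFrame) -> pd.DataFrame:
    """Per-condition decision accuracy and median decision time.

    `decisions` is assumed to hold one row per reviewed transaction, with
    columns: condition ("data_only" | "data_score" | "data_score_expl"),
    correct (1 if the analyst's decision matched the fraud label, else 0),
    and seconds (time taken to decide).
    """
    return decisions.groupby("condition").agg(
        accuracy=("correct", "mean"),
        median_seconds=("seconds", "median"),
        n=("correct", "size"),
    )


def accuracy_depends_on_condition(decisions: pd.DataFrame) -> float:
    """Chi-squared test of independence between condition and correctness;
    returns the p-value (one plausible choice of test for this comparison)."""
    table = pd.crosstab(decisions["condition"], decisions["correct"])
    _, p_value, _, _ = chi2_contingency(table)
    return p_value
```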

Impossible Explanations? Beyond Explainable AI in the GDPR from a COVID-19 Use Case Scenario

Ronan Hamon, Henrik Junklewitz, Gianclaudio Malgieri, Paul De Hert, Laurent Beslay, Ignacio Sanchez

Abstract

Can we achieve an adequate level of explanation for complex machine learning models in high-risk AI applications when applying the EU data protection framework? In this article, we address this question, analysing from a multidisciplinary point of view the connection between existing legal requirements for the explainability of AI systems and the current state of the art in the field of explainable AI. We present a case study of a real-life scenario designed to illustrate the application of an AI-based automated decision-making process for the medical diagnosis of COVID-19 patients. The scenario exemplifies the trend in the usage of increasingly complex machine-learning algorithms with growing dimensionality of data and model parameters. Based on this setting, we analyse the challenges of providing human-legible explanations in practice and we discuss their legal implications following the General Data Protection Regulation (GDPR). Although it might appear that there is just one form of explanation in the GDPR, we conclude that the context in which the decision-making system operates requires that several forms of explanation are considered. Thus, we propose to design explanations in multiple forms, depending on: the moment of the disclosure of the explanation (either ex ante or ex post); the audience of the explanation (explanation for an expert or a data controller and explanation for the final data subject); the layer of granularity (such as general, group-based or individual explanations); and the level of risk the automated decision poses to fundamental rights and freedoms. Consequently, explanations should embrace this multifaceted environment. Furthermore, we highlight how the current inability of complex, deep-learning-based machine learning models to make clear causal links between input data and final decisions represents a limitation for providing exact, human-legible reasons behind specific decisions. This makes the provision of satisfactory, fair and transparent explanations a serious challenge. Therefore, there are cases where the quality of possible explanations might not be assessed as an adequate safeguard for automated decision-making processes under Article 22(3) GDPR. Accordingly, we suggest that further research should focus on alternative tools in the GDPR (such as algorithmic impact assessments from Article 35 GDPR or algorithmic lawfulness justifications) that might be considered to complement the explanations of automated decision-making.
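The four dimensions the authors distinguish (moment, audience, granularity, risk level) can be pictured as a small configuration object that a data controller fills in once per required explanation. The sketch below is only an illustrative encoding of that taxonomy; all class and member names are invented here and are not terms from the article or the GDPR.

```python
from dataclasses import dataclass
from enum import Enum


class Moment(Enum):
    EX_ANTE = "before the decision is made"
    EX_POST = "after the decision is made"


class Audience(Enum):
    EXPERT_OR_CONTROLLER = "domain expert or data controller"
    DATA_SUBJECT = "final data subject"


class Granularity(Enum):
    GENERAL = "system-level explanation"
    GROUP = "group-based explanation"
    INDIVIDUAL = "individual decision explanation"


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3  # e.g. decisions touching fundamental rights and freedoms


@dataclass(frozen=True)
class ExplanationSpec:
    """One explanation to be produced for an automated decision, positioned
    along the four dimensions the article distinguishes."""
    moment: Moment
    audience: Audience
    granularity: Granularity
    risk: RiskLevel
```

On this reading, a deployment would maintain several ExplanationSpec instances (for example, an ex ante, general explanation for the data subject alongside an ex post, individual explanation for an expert reviewer) rather than a single catch-all explanation.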

Live Q&A Recording

A recording of this live session has not been uploaded yet. Check back soon, or join the session live.