Tutorial
March 4, 14:00-15:30 UTC
Technical / Methods Track

Explainable ML in the Wild: When Not to Trust Your Explanations

Shalmali Joshi, Chirag Agarwal, Himabindu Lakkaraju

Abstract

Machine learning (ML) and other computational techniques are increasingly being deployed in high-stakes decision-making. The process of deploying such automated tools to make decisions that affect the lives of individuals and society as a whole is complex and rife with uncertainty and rightful skepticism. Explainable ML (or, more broadly, XAI) is often pitched as a panacea for managing this uncertainty and skepticism. While the technical limitations of explainability methods are being characterized, formally or otherwise, in the ML literature, the impact of explanation methods and their limitations on end users and important stakeholders (e.g., policymakers, judges, doctors) is not well understood. We propose a translation tutorial to contextualize explanation methods and their limitations for such end users. We further discuss the overarching ethical implications of these technical challenges beyond misleading and wrongful decision-making. While we focus on implications for applications in finance, clinical healthcare, and criminal justice, our overarching theme should be valuable for all stakeholders of the FAccT community. Our primary objective is for this tutorial to serve as a starting point for regulatory bodies, policymakers, and fiduciaries to engage with explainability tools in a more sagacious manner.

Recorded Live Session

The recording of this live session has not been uploaded yet. Check back soon.