Technical / Methods Track
Explainable ML in the Wild: When Not to Trust Your Explanations
Machine learning (ML) and other computational techniques are increasingly being deployed in high-stakes decision-making. Deploying such automated tools to make decisions that affect the lives of individuals and society as a whole is complex and rife with uncertainty and rightful skepticism. Explainable ML (or, more broadly, XAI) is often pitched as a panacea for managing this uncertainty and skepticism. While the technical limitations of explainability methods are being characterized, formally and otherwise, in the ML literature, the impact of explanation methods and their limitations on end users and other key stakeholders (e.g., policymakers, judges, doctors) is not well understood. We propose a translation tutorial that contextualizes explanation methods and their limitations for such end users. We further discuss the overarching ethical implications of these technical challenges, beyond misleading and wrongful decision-making. While we focus on implications for applications in finance, clinical healthcare, and criminal justice, the themes should be valuable to all stakeholders in the FAccT community. Our primary objective is for this tutorial to serve as a starting point for regulatory bodies, policymakers, and fiduciaries to engage with explainability tools in a more sagacious manner.