Philosophy / Law Track
Fairness Metrics and Non-Discrimination Law: Can Fairness be Legally Automated?
Western societies are marked by diverse and extensive biases and inequalities that are unavoidably embedded in the data used to train machine learning systems. Algorithms trained on biased data will, without intervention, produce biased outcomes and amplify the inequality experienced by historically disadvantaged groups. Recognising this problem, much work has emerged in recent years to test for bias in machine learning and AI systems using various bias metrics. In this paper we assess the compatibility of technical fairness metrics and tests used in machine learning with the aims and purpose of EU non-discrimination law. We provide concrete recommendations, including a user-friendly checklist, for choosing the most appropriate fairness metric for uses of machine learning under EU non-discrimination law.
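For illustration, one of the simplest fairness metrics in this family is statistical (demographic) parity, which compares positive-prediction rates across protected groups. The sketch below is not taken from the paper; it is a minimal, self-contained example assuming binary predictions and a binary group label:

```python
def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between group 0 and group 1.

    y_pred: iterable of binary predictions (0 or 1)
    group:  iterable of binary protected-group labels (0 or 1)
    A value of 0 indicates statistical parity between the two groups.
    """
    def positive_rate(g):
        members = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(members) / len(members)

    return abs(positive_rate(0) - positive_rate(1))

# Hypothetical example: group 0 receives positive predictions at 0.75,
# group 1 at 0.25, giving a parity gap of 0.5.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Whether such a metric aligns with EU non-discrimination law is precisely the kind of question the paper's checklist is designed to address.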