Event report — Watch the video: Moral Principles for Evaluating Fairness Metrics in AI
A lecture by Derek Leben, Professor of Philosophy, University of Pittsburgh at Johnstown
Abstract: Machine learning (ML) algorithms are increasingly being used in both the public and private sectors to make decisions about jobs, loans, college admissions, and prison sentences. The appeal of ML algorithms is clear: they can vastly increase the efficiency, accuracy, and consistency of decisions. However, because the training data for ML algorithms contains discrepancies caused by historical injustices, these algorithms often exhibit biases against historically oppressed groups. The field of "Fairness, Accountability, and Transparency in Machine Learning" (FAT ML) has developed several metrics for determining when such bias exists, but satisfying all of these fairness metrics simultaneously is mathematically impossible, and some of them require large sacrifices in the accuracy of ML algorithms. I propose that we can make progress on evaluating fairness metrics by drawing on traditional principles from moral and political philosophy. These principles, which include Egalitarianism, Libertarianism, Desert-Based Approaches, Intention-Based Approaches, and Consequentialism, are largely designed around the problem of determining a fair distribution of resources. My goal is to describe in detail how each of these approaches will favor a particular set of fairness metrics for evaluating ML algorithms.
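The tension among fairness metrics that the abstract alludes to can be made concrete with a small sketch. The abstract does not name specific metrics, so the two chosen below, demographic parity and equal opportunity, are assumptions drawn from the standard FAT ML literature; the data is purely illustrative toy data, not from the talk.

```python
# Illustrative sketch of two standard FAT ML fairness metrics on toy data.
# Metric choices and data are assumptions; the abstract names neither.

def demographic_parity_gap(preds, groups):
    """|P(pred=1 | group A) - P(pred=1 | group B)|: gap in selection rates."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(preds, labels, groups):
    """|TPR_A - TPR_B|: gap in true-positive rates among truly positive cases."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos)
    return abs(tpr("A") - tpr("B"))

# Toy data: when base rates differ by group, a classifier can satisfy
# demographic parity exactly and still violate equal opportunity.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [1, 1, 1, 0, 1, 0, 0, 0]   # group A has a higher base rate
preds  = [1, 1, 0, 0, 1, 1, 0, 0]   # selection rate = 0.5 in both groups

print(demographic_parity_gap(preds, groups))          # 0.0
print(equal_opportunity_gap(preds, labels, groups))   # |2/3 - 1| = 1/3
```

The sketch shows why no single metric settles the fairness question: equalizing selection rates and equalizing true-positive rates pull in different directions whenever the groups' base rates differ, which is the core of the impossibility results the talk addresses.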