G. Milton Wing Lecture

Quantifying Bias in Human and Machine Decisions, Part I

Sharad Goel, Stanford University

Tuesday, September 10th, 2019
4:50 PM - 6:00 PM
Sloan Auditorium, Goergen Hall

There’s widespread concern that high-stakes decisions – made both by humans and by algorithms – are biased against groups defined by race, gender, and other protected traits. In a series of two talks, I’ll describe several interrelated threads of research that seek to define, detect, and combat bias in human and machine decisions, drawing on new and old ideas from statistics, computer science, law, and economics. In the first talk (on Tuesday, September 10), I’ll discuss bias in human decisions, and demonstrate that the most popular statistical tests for discrimination can, in practice, yield misleading results. To address this issue, I propose two new methods. The first, which we call the threshold test, is designed to circumvent the problem of “infra-marginality”; the second, which we call risk-adjusted regression, mitigates the problem of “included-variable bias”. I’ll illustrate these techniques on large-scale datasets of police interactions with the public. In the second talk (on Wednesday, September 11), I’ll focus on bias in machine decisions, and similarly show that the most popular measures of algorithmic fairness suffer from deep statistical flaws. I’ll argue that algorithms designed to satisfy those measures can, perversely, harm the very groups they were designed to protect. To demonstrate these ideas, I’ll discuss a class of risk-assessment algorithms used by judges nationwide when setting bail.
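To make the second method concrete, below is a minimal sketch of the risk-adjusted regression idea: estimate each individual's risk from legitimate factors alone, then regress the decision on group membership while adjusting for that estimated risk rather than for every recorded covariate. This is an illustration of the general approach, not the implementation used in the research; the dataset, column names, and synthetic data-generating process are all hypothetical.

# Illustrative sketch of risk-adjusted regression (hypothetical data and columns).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
stops = pd.DataFrame({
    "race": rng.choice(["white", "black"], size=n),
    "age": rng.integers(16, 70, size=n),
    "night": rng.integers(0, 2, size=n),
})

# Hypothetical "true" risk (e.g., probability of carrying contraband),
# depending only on legitimate factors.
risk_true = 1 / (1 + np.exp(-(-2.0 + 0.02 * (40 - stops["age"]) + 0.5 * stops["night"])))
stops["contraband"] = rng.binomial(1, risk_true)

# Simulated decisions that depend on risk but also, hypothetically, on race.
p_search = 1 / (1 + np.exp(-(-1.0 + 2.0 * risk_true + 0.7 * (stops["race"] == "black"))))
stops["searched"] = rng.binomial(1, p_search)

# Step 1: estimate individual risk from legitimate factors only.
risk_model = smf.logit("contraband ~ age + night", data=stops).fit(disp=0)
stops["risk_hat"] = risk_model.predict(stops)

# Step 2: regress the decision on race while adjusting for estimated risk,
# rather than for the raw covariates, to mitigate included-variable bias.
adjusted = smf.logit("searched ~ C(race) + risk_hat", data=stops).fit(disp=0)
print(adjusted.params)  # the race coefficient estimates the risk-adjusted disparity

In this toy setup, the coefficient on race recovers the disparity in search decisions that remains after accounting for estimated risk; the threshold test, which addresses infra-marginality, requires a fuller model of decision thresholds and is not sketched here.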

These talks synthesize material developed over the last several years in collaboration with many people, including Sam Corbett-Davies, Avi Feller, Aziz Huq, Jongbin Jung, Emma Pierson, Justin Rao, Ravi Shroff, and Camelia Simoiu. Though there are common underlying themes, each of the two talks is self-contained.

Event contact: arjun dot krishnan at rochester dot edu