Food, Bias, and Justice: a Case for Statistical Prediction Rules
April 14, 2011
We’re remarkably bad at making good decisions. Even when we know what goal we’re pursuing, we make mistakes predicting which actions will achieve it. Are there strategies we can use to make better policy decisions? Yes – we can gain insight by looking at cognitive science.
On the surface, all we need to do is experience the world and figure out what does and doesn't work for achieving our goals (the focus of instrumental rationality). That's why we tend to respect expert opinion: experts have far more experience with an issue and have considered and evaluated different approaches.
Let’s take the example of deciding whether or not to grant prisoners parole. If the goal is to reduce repeat offenses, we tend to trust a panel of expert judges who evaluate each case and render a subjective judgment. They’ll do a good job, or at least as good a job as anyone else, right? Well… that’s the problem: everyone does a pretty bad job. Even experts’ decision-making is influenced by factors that have nothing to do with the matter at hand. Ed Yong calls attention to a fascinating study which finds that a prisoner’s chance of being granted parole is strongly influenced by when their case is heard relative to the judges’ snack breaks:
The graph is dramatic. It shows that the odds that prisoners will be successfully paroled start off fairly high at around 65% and quickly plummet to nothing over a few hours (although, see footnote). After the judges have returned from their breaks, the odds abruptly climb back up to 65%, before resuming their downward slide. A prisoner’s fate could hinge upon the point in the day when their case is heard.
Curse our fleshy bodies and their need for “food” and “breaks”! It’s obviously a problem that human judgment is swayed by irrelevant, quasi-random factors. How can we counteract those effects?
Statistical Prediction Rules do better
Fortunately, we have science and statistics to help. We can objectively record evidential cues, observe the resulting target property, and find the correlations between them. Over time, we can build an objective model that takes our meat-brain limitations out of the loop.
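To make that concrete, here’s a minimal sketch in Python. The cues, records, and fitting method here are all invented for illustration; nothing in this literature prescribes this particular procedure. The idea is just: record cues and outcomes for past cases, fit a simple linear model, and use it to score new ones.

```python
import numpy as np

# Hypothetical historical records, one row per past parolee.
# Cues: [prior_offenses, age_at_release, months_served]
X = np.array([
    [4, 23, 18],
    [0, 41, 36],
    [2, 30, 24],
    [6, 19, 12],
    [1, 35, 30],
], dtype=float)

# Target property recorded afterward: 1 = violated parole, 0 = did not.
y = np.array([1, 0, 0, 1, 0], dtype=float)

# Fit a plain least-squares linear model (intercept column added first).
X1 = np.column_stack([np.ones(len(X)), X])
weights, *_ = np.linalg.lstsq(X1, y, rcond=None)

def violation_risk(prior_offenses, age_at_release, months_served):
    """Score a new case with the fitted rule (higher = riskier)."""
    cues = np.array([1.0, prior_offenses, age_at_release, months_served])
    return float(cues @ weights)

print(violation_risk(prior_offenses=3, age_at_release=25, months_served=20))
```

Ordinary least squares is about the crudest fit available, and that’s rather the point: the model’s virtue is consistency, not sophistication.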
Using models like these in place of unaided expert judgment was the advice of Bishop and Trout in “Epistemology and the Psychology of Human Judgment,” an excellent book recommended by Luke Muehlhauser of Common Sense Atheism (and a frequent contributor to Less Wrong).
Bishop and Trout argued that we should use such Statistical Prediction Rules (SPRs) far more often than we do. Not only are they faster; it turns out they’re also more trustworthy: using the same amount of information (or often less), a simple mathematical model consistently outperforms expert opinion.
They point out that when Grove and Meehl surveyed 136 different studies comparing SPRs to expert opinion, they found that “64 clearly favored the SPR, 64 showed approximately equivalent accuracy, and 8 clearly favored the clinician.” The target properties being predicted ranged from medical diagnoses to academic performance to – yup – parole violation and violence.
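What do these rules actually look like? Often nothing more than a weighted sum of a few cues; Robyn Dawes famously showed that even “improper” versions with unit weights hold up remarkably well against experts. Here’s a hedged sketch, again with invented cues, directions, and cutoff (a real SPR’s cues and weights would come from the historical data):

```python
# A unit-weighted "improper" linear rule: code each cue so it points
# in the risk-increasing direction, then simply add them up.

def spr_score(prior_offenses, age_at_release, months_served):
    cues = [
        prior_offenses >= 2,   # more prior offenses -> higher risk
        age_at_release < 25,   # younger at release  -> higher risk
        months_served < 24,    # shorter time served -> higher risk
    ]
    return sum(cues)  # each cue gets weight 1

def predict_violation(prior_offenses, age_at_release, months_served, cutoff=2):
    """Flag a likely parole violation when enough cues point to risk."""
    return spr_score(prior_offenses, age_at_release, months_served) >= cutoff

print(predict_violation(prior_offenses=3, age_at_release=22, months_served=12))
# True: all three cues fire, clearing the (made-up) cutoff of 2
```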
So based on a few cues, a Statistical Prediction Rule would probably give a better prediction than the judges on whether a prisoner will violate parole or commit a crime. And it would do so very quickly – just by putting the numbers into an equation! So all we need to do is show the judges the SPRs and they’ll save time and do a better job, right? Well, not so much.