# De Finetti’s Game: How to Quantify Belief

August 15, 2011

What do people really mean when they say they’re “sure” of something? Everyday language is terrible at describing actual levels of confidence – it lumps together different degrees of belief into vague groups which don’t always match from person to person. When one friend tells you she’s “pretty sure” we should turn left and another says he’s “fairly certain” we should turn right, it would be useful to know how confident they each are.

Sometimes it’s enough to hear your landlord say she’s pretty sure you’ll get towed from that parking space – you’d move your car. But when you’re basing an important decision on another person’s advice, it would be better to describe confidence on an objective, numeric scale. It’s not necessarily easy to quantify a feeling, but there’s a method that can help.

Bruno de Finetti, a 20th-century Italian mathematician, came up with a creative idea called de Finetti’s Game to help connect the feeling of confidence to a percent (hat tip Keith Devlin in The Unfinished Game). It works like this:

Suppose you’re half a mile into a road trip when your friend tells you that he’s “pretty sure” he locked the door. Do you go back? When you ask him for a specific number, he replies breezily that he’s 95% sure. Use that number as a starting point and begin the thought experiment.

In the experiment, you show your friend a bag with 95 red and 5 blue marbles. You then offer him a choice: he can either pick a marble at random and, if it’s red, win $1 million. Or he can go back and verify that the door is locked and, if it is, get $1 million.

If your friend chooses to draw a marble from the bag, he prefers the 95% chance to win. His real confidence that he locked the door must be somewhere below that. So you play another round – this time with 80 red and 20 blue marbles. If he would rather check the door this time, his confidence is higher than 80%, and perhaps you try an 87/13 split next round.

And so on. You keep offering different deals in order to home in on the level where he feels equally comfortable selecting a random marble and checking the door. That’s his real level of confidence.
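The rounds above amount to a binary search over probabilities. Here’s a minimal sketch, where `prefers_marbles` is a hypothetical stand-in for the friend’s choice each round (it isn’t part of de Finetti’s formulation – a real game asks a person, not a function):

```python
def definetti_game(prefers_marbles, rounds=10):
    """Home in on someone's confidence by repeatedly offering a choice
    between a p-chance marble draw and betting on the belief itself.

    prefers_marbles(p) should return True if, offered a bag with
    p*100% red marbles, the person would rather draw than check the
    door -- meaning their confidence is below p."""
    lo, hi = 0.0, 1.0
    for _ in range(rounds):
        p = (lo + hi) / 2
        if prefers_marbles(p):
            hi = p   # the bag looked better: confidence < p
        else:
            lo = p   # checking the door looked better: confidence >= p
    return (lo + hi) / 2

# Example: a friend whose true (hidden) confidence is 88%
friend = lambda p: p > 0.88
print(round(definetti_game(friend), 3))  # → 0.88
```

Ten rounds narrow the interval to about a tenth of a percent – far more precision than the underlying feeling really supports, as the post notes.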

The thought experiment should guide people through the tricky process of connecting their feeling of confidence to a corresponding percent. The answer will still be somewhat fuzzy – after all, we’re still relying on a feeling that one option is better than another.

It’s important to remember that the game doesn’t tell us how likely we are to BE right. It only tells us about our confidence – which can be misplaced. From cognitive dissonance to confirmation bias, there are countless psychological influences messing up the calibration between our confidence level and our chance of being right. But the more we pay attention to the impact of those biases, the more we can do to compensate. It’s a good practice (though pretty rare) to stop and think, “Have I really been as accurate as I would expect, given how confident I feel?”

I love the idea of measuring people’s confidence (and not just because I can rephrase it as measuring their doubt). I just love being able to quantify things! We can quantify exactly how much a new piece of evidence is likely to affect jurors, how much a person’s suit affects their persuasive impact, or how much confidence affects our openness to new ideas.

We could even use de Finetti’s Game to watch the inner workings of our minds doing Bayesian updating. Maybe I’ll try it out on myself to see how confident I feel that the Ravens will win the Super Bowl this year before and after the Week 1 game against the rival Pittsburgh Steelers. I expect that my feeling of confidence won’t shift quite in accordance with what the Bayesian analysis tells me a fully rational person would believe. It’ll be fun to see just how irrational I am!
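The felt shift could then be compared against a textbook Bayes’ theorem calculation. A sketch with invented numbers (the prior and both likelihoods below are made up for illustration – they aren’t real football statistics):

```python
# Prior belief that the Ravens win the championship (illustrative)
prior = 0.10

# Likelihoods, both invented for the sake of the example:
# P(beat the Steelers in Week 1 | eventual champions)
p_win_given_champs = 0.70
# P(beat the Steelers in Week 1 | not champions)
p_win_given_not = 0.45

# Bayes' theorem: P(champions | won Week 1)
evidence = p_win_given_champs * prior + p_win_given_not * (1 - prior)
posterior = p_win_given_champs * prior / evidence
print(round(posterior, 3))  # → 0.147
```

Playing de Finetti’s Game on yourself after the Week 1 result would then show how far your gut’s update lands from the computed posterior.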

I like the emphasis on quantification here. It makes me think of the weather reports that give 90% as the chance of rain — you never see any follow-up on how often it rains when they say there’s a 90% chance, and yet that shouldn’t be a hard thing to check out.

Unlike most of us in our daily lives, I suspect the weather forecasters pay close attention to how well-correlated their confidence is to their results. After all, they have complicated computer models to do exactly that.

If our impression is that the forecasters aren’t giving an accurate picture, the problem might be on our end! We could remember the times they got it wrong but cheerfully forget all the times they were right.

But, like you said, it would be an easy (and possibly fun) thing to check out!
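That check really is only a few lines of work. A sketch with an invented forecast record (the data below is made up; a real check would use a forecaster’s published history):

```python
# Each record: (stated probability of rain, whether it actually rained).
# These records are invented for illustration.
forecasts = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.2, False), (0.2, False), (0.2, True), (0.2, False), (0.2, False),
]

def calibration(records):
    """Group forecasts by stated probability and report each group's
    observed frequency of rain, for comparison with the stated chance."""
    buckets = {}
    for p, rained in records:
        buckets.setdefault(p, []).append(rained)
    return {p: sum(hits) / len(hits) for p, hits in sorted(buckets.items())}

print(calibration(forecasts))  # → {0.2: 0.2, 0.9: 0.8}
```

A well-calibrated forecaster’s observed frequencies would sit close to the stated probabilities in every bucket.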

That should reduce the number of “100% certain” estimates.

Put this into practice at http://www.forecastingace.com

There’s a whole thesis on verbal probability expressions.

http://www.scribd.com/doc/2959133/VERBAL-PROBABILITY-EXPRESSIONS-IN-NATIONAL-INTELLIGENCE-ESTIMATES

This is a very elegant way of quantifying the perceived odds, and extremely interesting as an academic exercise when it is not your own metaphorical door. In reality, if there is even a 5% chance you left the door unlocked, you have to go back. Unless it doesn’t matter, in which case the response would not have been a percentage, but “Who cares?”

If there is a 20% chance of rain, you still need to take an umbrella, or cancel the picnic, n’est-ce pas?

Psychologists have used de Finetti’s technique, and they’ve generally found that people’s gambling choices match their stated probabilities. For a recent example, see

Williams, E. & Gilovich, T. (2008). Do people really believe they are above average? Journal of Experimental Social Psychology, 44, 1121-1128.

Jesse, in order to correctly measure your rationality w.r.t. the Ravens’ chances at the championship, you’d have to model the structure of your football prior... almost any update is rational under some set of priors!

This is a brilliant idea, Jesse! I can’t believe I didn’t think of it before.


The concept of revealing utility preferences is good, but the game as stated doesn’t work because the $1 million payoff is arbitrary. For example, for a 5% extra chance to win $1 million, I’d definitely give up my vacation, but for a 5% extra chance to win $10.00, I’d go on to my destination. The stuff in my apartment is worth a lot closer to $0.50 than it is to $50,000. You’d come much closer if you set the payoff equal to the expected value of the loss if some mishap were to occur, assuming that the door is unlocked. But here’s a question for you – let’s say you rephrased the question as a probability of losing money rather than as a probability of gaining money. Do you think you’d get the same answer?
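To make the arithmetic behind that objection explicit (using the dollar figures from the comment above): a 5% swing in win probability is worth 5% of the payoff in expectation, so the stakes of the game dwarf or undercut the real loss depending on the payoff chosen.

```python
# Expected value of a 5% extra chance at each payoff from the comment
for payoff in (1_000_000, 10.00):
    print(f"5% of ${payoff:,.2f} = ${0.05 * payoff:,.2f}")
# 5% of $1,000,000.00 = $50,000.00
# 5% of $10.00 = $0.50
```

Matching the payoff to the expected loss from an unlocked door, as the commenter suggests, removes that distortion.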