Thinking in greyscale
May 23, 2011
Have you ever converted an image from greyscale into black and white? Basically, your graphics program rounds all of the lighter shades of grey down to “white,” and all of the darker shades of grey up to “black.” The result is a visual mess – same rough shape as the original, but unrecognizable.
Something similar happens to our mental picture of the world whenever we talk about how we “believe” or “don’t believe” an idea. Belief isn’t binary. Or at least, it shouldn’t be. In reality, while we can be more confident in the truth of some claims than others, we can’t be absolutely certain of anything. So it’s more accurate to talk about how much we believe a claim, rather than whether or not we believe it. For example, I’m at least 99% sure that the moon landing was real. My confidence that mice have the capacity to suffer is high, but not quite as high. Maybe 85%. Ask me about a less-developed animal, like a shrimp, and my confidence would fall to near-uncertainty, around 60%.
Obviously there’s no rigorous, precise way to assign a number to how confident you are about something. But it’s still valuable to get in the habit, at least, of qualifying your statements of belief with words like “probably,” or “somewhat,” or “very.” It just helps keep you thinking in greyscale, and reminds you that different amounts of evidence should yield different degrees of belief. Why lose all that resolution unnecessarily by switching to black and white?
More importantly, the reason you shouldn’t ever have 0% or 100% confidence in any empirical claim is that doing so implies there is no conceivable evidence that could ever make you change your mind. You can prove this formally with Bayes’ theorem, which is a simple rule of probability that also serves as a way of describing how an ideal reasoner would update his belief in some hypothesis “H” after encountering some evidence “E.” Bayes’ theorem can be written like this:

P[H | E] = P[E | H] × P[H] / ( P[E | H] × P[H] + P[E | not H] × P[not H] )

… in other words, it’s a rule for how to take your prior probability of a hypothesis, P[H], and update it based on new evidence E to get the probability of H given that evidence: P[H | E].
So what happens if you think there’s zero chance of some hypothesis H being true? Well, just plug in zero for “P[H]” wherever it appears, and you’ll see that the numerator becomes zero (because zero times anything is zero), so the whole expression is zero. You don’t have to know any of the other terms to conclude that P[H | E] = 0. That means that if you start out with zero belief in a hypothesis, you’ll always have zero belief in that hypothesis no matter what evidence comes your way.
And what if you start out convinced, beyond a shadow of a doubt, that some hypothesis is true? That’s akin to saying that P[H] = 1. That also implies you must put zero probability on all the other possible hypotheses. So plug in 1 for P[H] and 0 for P[not H] in the equation above. With just a bit of arithmetic you’ll find that P[H | E] = 1. Which means that no matter what evidence you come across, if your belief in a hypothesis is 100% before seeing some evidence (that is, P[H] = 1) then your belief in that hypothesis will still be 100% after seeing that evidence (that is, P[H | E] = 1).
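If you’d rather see that arithmetic play out than take my word for it, here’s a small sketch of the update rule in Python (a toy illustration added for concreteness; the function name and the example numbers are just made up):

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update: P[H | E] from a prior P[H] and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A prior strictly between 0 and 1 actually moves in response to evidence:
print(update(0.5, 0.9, 0.1))   # -> 0.9
# ...but the extremes are stuck, no matter which way the evidence points:
print(update(0.0, 0.9, 0.1))   # -> 0.0, forever
print(update(1.0, 0.1, 0.9))   # -> 1.0, forever
```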
As much as I’m in favor of thinking in greyscale, however, I will admit that it can be really difficult to figure out how to feel when you haven’t committed yourself wholeheartedly to one way of viewing the world. For example, if you hear that someone has been accused of rape, your estimation of the likelihood of his guilt should be somewhere between 0 and 100%, depending on the circumstances. But we want, instinctively, to know how we should feel about the suspect. And the two possible states of the world (he’s guilty/he’s innocent) have such radically different emotional attitudes associated with them (“That monster!”/”That poor man!”). So how do you translate your estimated probability of his guilt into an emotional reaction? How should you feel about him if you’re, say, 80% confident he’s guilty and 20% confident he’s innocent? Somehow, finding a weighted average of outrage and empathy doesn’t seem like the right response — and even if it were, I have no idea what that would feel like.
Neat! I personally like thinking in terms of probabilities — I try to put it in terms of relative surprise: would I be more surprised for a public figure accused of cheating on his wife to turn out to be innocent than I would be to see three dice come up 6-6-6? OK, then, my belief in his innocence is somewhere south of half a percent.
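(The dice arithmetic, for what it’s worth: the chance of three dice all coming up 6 is (1/6)^3 = 1/216, or about 0.46%, which is indeed south of half a percent.)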
Interestingly, Harold Camping was fond of saying there was absolutely no chance whatsoever that he was wrong about the rapture taking place on May 21st. And yet, by May 23rd, he pronounced himself “flabbergasted” that it hadn’t! By your reasoning, shouldn’t he still be maintaining that it had, in fact, occurred, but that for some reason we couldn’t see it?
Great points — I’m planning on doing a post soon about how to estimate probabilities (and as you suggested, using comparisons is a great idea).
Cases like Harold Camping are tricky… they certainly seem to suggest that people can *think* they’re 100% certain without actually being 100% certain. But that’s a weird way of using the word “certain,” isn’t it? It’s like saying someone can think they’re in pain but not actually be in pain.
Right. (I haven’t seen this blog before so I don’t know the prior art but…) To expand on your answer a bit, the only obvious way to guess at what Camping’s ‘actual’ or anticipation-controlling confidence was before the Rapture is by imagining from a sparsely detailed model of his psychology what kinds of bets he would have been prepared to make if he thought something of truly great import, like souls, were on the line. If we think he cares a lot about his reputation, for selfish reasons or because having a better reputation makes it easier for him to help people save themselves, then it does seem as if he was willing to go pretty far out on a limb and bet a lot of utility on his belief. Camping’s prediction thus shows some virtue, in that sense; if he didn’t make clear and important predictions beforehand then he’d never learn if his methods of reasoning could generally make correct predictions about the future or not, instead of simply ‘explaining’ everything after the fact.
(Unfortunately, when psychologically something feels very important and truthful to you, and you can draw a plausible chain from that important truthful thing to an intuitively plausible prediction or truth that would also be very important, it is hard to not become overconfident that the whole causal chain is as waterproof as it feels like it must be. That is where Camping erred.)
To ‘answer’ Barry’s question: when humans assign very little probability to an event that then occurs anyway, they often don’t have a psychologically appealing hypothesis they can fall back on. Your hypothesis predicted black, but the world spat out white, and so no hypothesis that could honestly have predicted white feels at all credible. Camping seems to have honestly never expected that the Rapture might fail to obviously, demonstrably occur. Now he’s just confused, since no available hypothesis looks enough like his original one (which in his mind came directly from the Bible, which is infallible) while still looking justified before the evidence came in.
He (apparently thus far at least) demonstrates another important virtue here, which is the virtue of noticing confusion, instead of simply rationalizing the result immediately. He might go back and find an error in his calculations, but I suspect that he’s actually going to go back and check, instead of using that as just a social excuse. In fact, Camping shows off many virtues of a good scientist in forming a falsifiable hypothesis and honestly accepting reality’s retort. That you can be so virtuously scientific in a few socially obvious ways and yet so extremely far from sanity is humbling for the rest of us as well as Camping.
-Do you, Julia Galef, take _____ to be your lawful wedded husband, to have and to hold, from this day forward, for better or for worse, for richer or for poorer, in sickness and in health for as long as you both shall live?
-I’m 50% sure I do.
Max, out of all the possible names I could put in that blank, I think “Tim Minchin” is the one who would be the most cool with the statistical wedding vow.
“Your love is one in a million… but of the 9.999 hundred thousand other possible loves, statistically some of them would be equally nice.”
http://twentytwowords.com/2011/03/10/funniest-song-youll-hear-today-tim-minchins-statistically-accurate-love-song/
Is it possible that when somebody like Camping says they are “100% sure,” the statement is best viewed through the lens of colloquial speech rather than a more rigorous mathematico-scientific lens?
I think nearly everybody accepts that there is something which would gainsay their certainty about $Whatever, and when we say “I’m absolutely certain” we really mean “the chances of me being wrong, while they exist, are really, really small.” We’ve made the step over that final epsilon, to borrow a phrase, and the way we acknowledge that is a little bit rhetoric, a little bit hyperbole.
Even in talking with the most ardent creationists over the years, I’ve never come across anybody who answered the question “Can you think of something that would change your mind?” with “Nothing”. (Insert anecdote caveat here.)
What would change the most ardent Creationist’s mind? God telling him that Evolution is true?
That’s funny, a few weeks ago I was trying to come up with rational wedding vows. Here’s what I got: “…To have and to hold for an indefinite period of time because you cannot envision a probable scenario in which you find them intolerable.”
Ideally, in the criminal justice system the punishment is proportional to the crime, but first the probability of the crime must pass a threshold: beyond a reasonable doubt. So a 99.9%-likely petty thief may be lightly punished, while an 80%-likely serial rapist may be let free.
But then things get muddled when people argue that a severe punishment like the death penalty requires a higher standard of evidence.
Now, let’s say we get rid of the threshold, and make the punishment proportional to both the crime and the probability of the crime. If you pick a random person on the street, there’s a chance he’s guilty of some crime, and therefore deserves a slap on the wrist :-p
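A toy sketch of what that might look like, with invented numbers and function names (just the arithmetic, not a policy proposal):

```python
def threshold_rule(p_guilty, severity, threshold=0.95):
    # Punish fully only if guilt passes the "beyond a reasonable doubt" threshold.
    return severity if p_guilty >= threshold else 0.0

def proportional_rule(p_guilty, severity):
    # Scale the punishment by the probability of guilt.
    return p_guilty * severity

cases = [("99.9%-likely petty thief", 0.999, 1.0),
         ("80%-likely serial rapist", 0.80, 100.0)]
for label, p, severity in cases:
    print(label, threshold_rule(p, severity), proportional_rule(p, severity))
```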
“Which means that no matter what evidence you come across, if your belief in a hypothesis is 100% before seeing some evidence (that is, P[H] = 1) then your belief in that hypothesis will still be 100% after seeing that evidence (that is, P[H | E] = 1).”
What if you see evidence that is completely inconsistent with your hypothesis, i.e. P[E|H]=0 (and P[H]=1)? Bayes’ formula for P[H|E] gives 0/0 in this case.
Good question, alex.
Actually, I just realized your question is another (formal) way of posing Barry’s question, above: Harold Camping said that he was 100% convinced the world would end May 21st, but then the world didn’t end. So basically his P(H) was 1, but he then encountered evidence that would be impossible under H (P(E|H) = 0).
I asked my mathematician friend and he said: If P(H)=1 and P(E|H)=0, then P(E) = 0 (since P(E) = P(E|H)P(H) + P(E|not H)P(not H) = 0×1 + P(E|not H)×0 = 0). And it’s simply a logical impossibility to have E occur when P(E)=0, and so Bayes’ rule doesn’t apply.
I am a mathematician myself, so I just can’t resist quibbling with this statement:
And it’s simply a logical impossibility to have E occur when P(E)=0
Not a logical impossibility: generate a random variable U which is uniform on [0,1]. Then P(U=x)=0 for any x, and yet some U=x is going to occur.
I think that if you believe in quantum mechanics, it’s not even a physical impossibility, i.e. probability-zero events occur in the real world (because solutions to Schrödinger’s equation are continuous). I’m open to being corrected on this point by a knowledgeable physicist.
Anyway, whether or not probability zero events occur in the real world is a bit of a red herring; it certainly does happen that people believe certain events are impossible and then they occur anyway (e.g., Camping).
I believe Bayes’ formula is of no help in figuring out how to update beliefs in this situation. All of this is to say that I think your statement – that an ideal reasoner which believes H with 100% certainty will always do so regardless of new evidence – at the very least requires more justification to deal with this case.
With a continuous distribution, you use its probability density function, like pdf(U=x)=1 for x between 0 and 1, and pdf(U=x)=0 for all other x. In that case, observing x=0.3 is possible, but observing x=2 is impossible.
Alex, that’s true — I was implicitly assuming that there was a finite number of events.
I suppose you could have an infinite number of discrete events, although not a uniform distribution over them… but you could still only apply Bayes’ rule to those events with a nonzero probability. And if you wanted to apply Bayes’ rule when you’re dealing with events with zero probability, you might be able to do that if you reformulate the rule using probability density functions.
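(A sketch of what that reformulation might look like, for discrete hypotheses and a continuous observation x: P[H | x] = pdf[x | H] × P[H] / ( pdf[x | H] × P[H] + pdf[x | not H] × P[not H] ), which stays well-defined as long as at least one of those densities is nonzero at the observed x.)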
I can’t help but quibble – Belief is binary and can’t be otherwise; credence isn’t binary and can’t be. You either do or do not believe the moon to be made of cheese; your credence in it is represented on a scale from 0 to 1. Ignoring this point obscures a very interesting philosophical problem in epistemology (and decision theory): how to relate the standards governing rational belief to the standards governing rational credence (e.g., Bayes’ theorem). You can try to solve this problem by eliminating beliefs from your theory of mind (Jeffrey’s solution, if I recall), by saying that belief just is having some specific level of credence in a proposition (collapsing the one set of norms into the other), or by some third solution. Sadly, there are good reasons to reject both the first and second proposals, leaving us in a bit of a pickle… (For the interested, there’s an excellent discussion of these matters in Fantl & McGrath’s “Knowledge in an Uncertain World”).
Julia,
I enjoyed the post. It was a fun read. But I want to second Vesuvium’s worry, though I think it isn’t much more than a desire for linguistic organizing.
It seems that certainty and belief are being conflated here, and a distinction needs to be drawn between the two concepts. Our assenting to the truth of a proposition is all that belief amounts to, and it’s not like we can kind of assent to a proposition or kind of not assent to it. Either you assent or you don’t; there is no in-between. Either you believe or you don’t believe. On the other hand, you believe P with more or less certainty. Certainty (which I think roughly corresponds to how you used ‘confidence’) does come in degrees.
Think of it like assertion. You can’t kinda assert that P or kinda not assert that P. But you can assert that P more or less boldly. Believing works just like asserting, and asserting more or less boldly is like believing with more or less certainty. Certainty is a property of believing, which I think is what you seem to be getting at (or maybe just some other general property of belief). It is important, though, to distinguish these concepts, because it matters for understanding where the Cartesian skeptics go wrong.
I had one other question for you, though. You stated we shouldn’t ever have 100% confidence in an empirical belief. But I am wondering how this would apply to necessary a posteriori truths, like water’s being H2O. The necessity of that fact seems to make it at least permissible to put 100% confidence in its being the case.
Pain and stress in crustaceans?
Converting fingerprints from greyscale into black and white can make them cleaner and more recognizable.
Greyscale: http://en.wikipedia.org/wiki/File:Fingerprint_Loop.jpg
B&W: http://en.wikipedia.org/wiki/File:Fingerprint_picture.svg
Clever. I’d agree that there’s some tradeoff we make between intelligibility and accuracy. So in some cases, we might be able to sacrifice some resolution in our probabilistic picture of the world, but in return we might gain a clear holistic picture of how the world works which we might not have noticed if we were fully thinking in greyscale. That would be the analogy to your fingerprint case, I think.
I don’t have any idea offhand what kinds of situations would benefit from a “cleaning up the fingerprint” approach rather than my original “preserve maximum resolution” approach… but I would guess that the former is useful if you’re trying to formulate some grand overarching theory of how the world works, a la Marxism or Freudianism. But those grand overarching theories don’t have a great track record.
A lot of image processing boils down to filtering followed by thresholding. Like, to detect faces in a photo, you might design a filter that generates a new image where pixel intensity indicates the probability that the corresponding pixel in the photo is NOT part of a face. Then, you can decide that p<0.05 indicates faces, so it's a hypothesis test with a 5% statistical significance threshold.
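A rough sketch of that last thresholding step (the array here is just a random stand-in for whatever a real face filter would output):

```python
import numpy as np

# Stand-in for a filter's output: each entry is the (made-up) probability that
# the corresponding pixel is NOT part of a face.
rng = np.random.default_rng(0)
not_face_prob = rng.random((4, 4))

# Greyscale: the probabilities themselves. Black and white: one hard yes/no
# per pixel at the 5% significance threshold.
face_mask = not_face_prob < 0.05
print(face_mask)
```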
Even when your beliefs are on a continuous scale, decisions tend to be discrete. Either convict or acquit the defendant. Either take a full course of antibiotics or none. Either vote for one candidate or the other.
If the accused is a rapist, I desire to feel the appropriate emotions when contemplating a rapist. If the accused is not a rapist, I desire to feel the appropriate emotions when contemplating an innocent accused of rape.
(modified http://wiki.lesswrong.com/wiki/Litany_of_Tarski)
Here’s an entire thesis:
“Verbal Probability Expressions in National Intelligence Estimates”
http://www.scribd.com/doc/2959133/VERBAL-PROBABILITY-EXPRESSIONS-IN-NATIONAL-INTELLIGENCE-ESTIMATES
It also talks about weather, finance, and medicine.