Being a Dick is not Binary

(crossposted at Friendly Atheist)

“Should we be offensive?” is a common question in the secular movement. It’s also the wrong question.

The title of this post comes from Phil Plait’s “Don’t be a Dick” talk at TAM 8, which sparked conversation about the wisdom of offending people in the cause of critical thinking. Though it generated the most attention, it’s not the first time we’ve asked these questions: Should we condemn people for opposing LGBT rights? Mock people for believing in creationism? Call religion a delusion? Sometimes it seems like everything we do offends people – even the simple act of advertising our existence offended Iowa Governor Chet Culver.

In the face of that, it’s almost liberating, isn’t it? If everything we do is offensive, it doesn’t matter anymore – we can stop worrying about it. In fact, I used to argue that myself! When confronted with accusations that Everybody Draw Muhammad Day was offensive, I’d point to the bus ads and billboards and say, “People get offended at the most mundane things. We can’t let that hold us back.”

But offensiveness is not a simple yes-or-no issue. As Julia wrote a few months ago, it’s tempting to treat belief as a black-and-white matter. It’s not – we can hold beliefs with differing degrees of confidence, and if we treat it otherwise we lose a lot of power to make distinctions, see nuance, and chart the best course of action. It’s the same with asking whether or not to be offensive. We need to add nuance.

At the first level, it’s probably more helpful to phrase the question “How many people are my actions likely to offend?” Not all offensive statements are equal. Sure, saying “People can be good without god” offends people, but not as many people as “Religion is a myth.”

We can go further. Asking how many people we expect to offend still treats the issue as a binary: they’re either offended or they’re not. A better phrasing would be “How offended will people be?” Billboards reading “Religion is a myth” and “Jesus was a bastard” would both upset a lot of people – but not to the same extent.

But even this isn’t what we want to be asking. To take the final step, we need to dissolve the question away into what we actually want to know. Each time we ask “Should we be a dick in this situation?” we’re really wondering a lot of things, like:

  • Do we like the short-term and long-term reactions this will elicit?
  • Would it attract attention for our message?
  • Would it reduce the chance of persuading the target?
  • Would it help push the boundaries of the national conversation?
  • Would it damage a helpful relationship?

There isn’t an inherent property “being offensive” or “being a dick” – that’s just a heuristic, and it’s not very precise. Well, maybe I shouldn’t say just a heuristic – labeling a message as ‘offensive’ is a helpful way to talk about expected reactions. But we need to be able to step back and refocus our attention when the heuristic causes confusion.

And the heuristic IS causing confusion. Treating it as a single, inherent property leads people to miss the strategic benefits – and drawbacks – of getting people upset in different ways and contexts. Treating it as a binary question leads people to wield anger indiscriminately rather than tactically.

What we should be asking ourselves, when choosing a message, is this: “How offended do we want people to be, and offended how?”

For example, I still stand behind my support of Everybody Draw Muhammad Day – it did cause a lot of offense, but it offended people in the right way: by intentionally disregarding the Islamic demand that we respect their prophet. That was the goal – shocking people into paying more attention to a dogma which wouldn’t stand up to scrutiny.

On the other hand, I wouldn’t support using mockery in a one-on-one conversation with a creationist. When we’re trying to educate someone, a small amount of offense is useful to catch their attention – say, by openly disagreeing. But mockery is a different kind of offense, one that reduces our chances of convincing them.

Sometimes it’s easier to talk about whether or not to offend people. But we can be so much more precise by thinking about it in terms of anger, surprise, disrespect, and disagreement.

They say the devil’s in the details – so we should feel right at home.

Ender Wiggin, Harry Potter, and Kurt Gödel

I love finding real-life connections to my favorite fictional characters. One of the consistent criticisms I hear about Ender’s Game is that people have trouble buying into the notion that children as young as six can be so intelligent, rational, and independent. That’s also a knock against Harry in Harry Potter and the Methods of Rationality (which was clearly influenced by Ender’s Game) – he just doesn’t fit with how we expect eleven-year-olds to behave. But if we accept the premise of a hyper-intelligent child, would the other traits follow?

I was reading Rebecca Goldstein’s book Incompleteness on the life and work of Kurt Gödel, and young Kurt might fit the bill. Gödel was an extremely intelligent child, far more intelligent than his parents. Goldstein thinks he made this realization as early as five, and it had a big impact on his character:

It would be comforting, in the presence of such a shattering conclusion… to derive the following additional conclusion: There are always logical explanations and I am exactly the sort of person who can discover such explanations. The grownups around me may be a sorry lot, but luckily I don’t need to depend on them. I can figure out everything for myself. The world is thoroughly logical and so is my mind – a perfect fit.

It’s been a while since I’ve read Ender’s Game, but that sounded pretty familiar – the grown-ups weren’t able (his parents) or willing (the teachers) to protect him, so he had to find ways to solve problems himself.

I’ve read Harry Potter and the Methods of Rationality much more recently, and he might be a closer fit. In this version, Harry is extremely intelligent and raised by parents who love him, but are – frankly – unable to keep up. This particular passage caught my eye:

Harry nodded. “I still don’t know whether the Headmaster was joking or… the thing is, he was right in a way. I had loving parents, but I never felt like I could trust their decisions, they weren’t sane enough. I always knew that if I didn’t think things through myself, I might get hurt… Even if it’s sad, I think that’s part of the environment that creates what Dumbledore calls a hero – people who don’t have anyone else to shove final responsibility onto, and that’s why they form the mental habit of tracking everything themselves.”

Situations like Kurt Gödel’s are rare, but that’s the point of fiction. Given his example, perhaps it’s not SO big of a stretch that children who surpass their parents at such a young age would turn into an Ender Wiggin or “rational” Harry Potter.

At the very least, perhaps this connection will help people suspend their disbelief a little bit, and go read either of these fantastic works of fiction.

Reflections on pain, from the burn unit

Deep frying: even more hazardous to your health than I realized.

Yesterday marked the end of my 18-day stay in New York Presbyterian Hospital’s burn unit, where I landed after accidentally overturning a pot of hot cooking oil onto myself. I ended up with second- and third-degree burns over much of my legs, but after skin graft surgery and some physical therapy, I can walk again, albeit unsteadily, and I have skin on my legs again, albeit ugly skin.

I learned a lot during my hospital stay. Unfortunately, nearly all of that hard-earned knowledge was in very specific topics – the ideal cocktail of pills, the least-uncomfortable position to sleep in, etc. – which will neither be applicable in other contexts nor interesting to other people. But I did leave with one realization about pain, and how we experience it.

I wasn’t in constant pain for the entire 18 days, by any means, but every day featured at least a few painful experiences, from the minor (frequent shots) to the major (scraping the dead skin off the burns). I tried a handful of methods to deal with it. Deep breathing helped a bit, as did pulling my own hair. One friend suggested I try imagining myself existing at a point halfway across the room; that helped a little, but only because our philosophical argument over whether it was even possible to pull off such a mental stunt briefly distracted me from my throbbing legs.

But the one thing that did seem to dramatically affect my pain level was my belief about what was causing the pain. At one point, I was lying on my side and a nurse was pulling a bandage off of one of my burns; I couldn’t see what she was doing, but it felt like the bandage was sticking to the wound, and it was agonizing. But then she said: “Now, keep in mind, I’m just taking off the edges of the bandage here, so this is all normal skin. It just hurts because it’s like pulling tape off your skin.” And once she said that — once I started picturing tape being pulled off of normal, intact skin rather than an open wound — the pain didn’t bother me nearly as much. It really drove home to me how much of my experience of pain is psychological; if I believe the cause of the pain is something frightening or upsetting, then the pain seems much worse.

And in fact, I’d had a similar thought a few months ago, which I’d then forgotten about until the burn experience called it back to mind. I’d been carrying a heavy shopping bag on my shoulder one day, and the weight of the bag’s straps was cutting into the skin on my shoulder. But I barely noticed it. And then it occurred to me that if I had been experiencing that exact same sensation on my shoulder, in the absence of a shopping bag, it would have seemed quite painful. The fact that I knew the sensation was caused by something mundane and harmless reduced the pain so much it didn’t even register in my mind as a negative experience.

Of course, I probably can’t successfully lie to myself about what’s causing me pain, so there’s a limit to how directly useful this observation can be for managing pain in the future. But it was indirectly useful for me, because it proved to me something I’d heard but never quite believed: that the unpleasantness of pain is substantially (entirely?) psychologically constructed. A bit of subsequent reading led me to some fascinating science that underlines that conclusion – for example, the fact that the physical sensation of pain is processed by one region of the brain while the unpleasantness of that sensation is processed by another region. And the existence of a condition called pain asymbolia, in which people with certain kinds of brain damage say they’re able to feel pain but that they don’t find it the slightest bit unpleasant.

The relationship between pain and unpleasantness is a philosophically interesting one, in fact. Unpleasantness is usually considered to be built into the very definition of pain, so it’s quite confusing to talk about experiencing different levels of unpleasantness from the same level of pain. And it’s even more confusing to talk about experiencing no unpleasantness from pain, as people with pain asymbolia do. The idea feels almost as incoherent as that of being happy but not enjoying it, or doubling a number without making it any bigger.

But observing my own experiences of pain a bit more closely has made it a little easier for me to wrap my mind around the idea. I really did feel, when the nurse informed me that she was pulling the bandage off of intact skin rather than burned skin, like the pain was the same but the unpleasantness was lessened. It’s harder to imagine pain with no unpleasantness, but perhaps my shopping bag example sheds a little light: I felt the sensation of something cutting into my shoulder, but it didn’t bother me. So maybe someone with pain asymbolia would experience a cutting sensation as if they’re just carrying a heavy shopping bag, with no “Warning!” and “This is awful!” alarms going off in their mind.

I’ll have to think more about the relationship between pain and the experience of pain, because it’s still confusing to me, but at least I can feel like I got some new philosophical food for thought out of my 18 days at NY Presbyterian. Not to mention the very practical, un-philosophical lesson: don’t leave your giant pots of oil near the edge of the stove.

(ETA: I completely forgot, while writing this, that Jesse had touched on this very subject last month! Wow, Jesse — in retrospect, that’s an eerily prescient post.)

The Social Psychology of Burning Man

(Cross-posted at Scientific American’s Guest Blog.)

I just finished shaking the last of the desert dust out of the bags I brought to this year’s Burning Man, an annual week-long event in Nevada’s Black Rock Desert that takes its name from the burning of a giant effigy at the end of the week.  According to popular perception, Burning Man is a non-stop rave thrown by a bunch of drugged-out naked hippies. That’s not entirely false, admittedly, but it’s only a small piece of the picture.

Burning Man is also a large-scale social experiment. The 50,000 people who converge on the desert each year create a temporary but legitimate city – roughly the size of Santa Cruz, CA or Flagstaff, AZ — with its own street grid, laws, and social mores. In the process, they attempt to do away with several of the most fundamental institutions underlying modern civilization. Clothing, for example, is optional at Burning Man, and many people opt out of it.

Money, on the other hand, is not optional: it’s explicitly banned. People exchange goods and services constantly, but money never changes hands, except in one specially designated central tent which sells coffee and tea. I’ve heard Burning Man sometimes described as a “barter economy,” but that’s not quite right. It’s more of a “gift economy,” in which people give strangers food, drinks, clothing, massages, bike repairs, rides back to camp, and more, all without any expectation of reciprocation. Many attendees also invest a great deal of their own time and money beforehand to make other people’s experiences at Burning Man more beautiful, interesting, and comfortable, setting up tents or couches for public use or crafting elaborate art installations out in the desert for others to discover.

Read the rest at Scientific American.

Pain Research: Not Minding That It Hurts

How well can we adapt to pain in the long run? Since pain is such a source of disutility, it’s important for us to learn as much as we can about managing or reducing its impact on our lives. One researcher studying the issue is Dan Ariely, who has a rare perspective after suffering major burns at a young age. He describes some fascinating findings at the beginning of one of his TED Talks (before moving on to his research on cheating), but he devotes a whole chapter to adaptation in his recent book, The Upside of Irrationality.

I haven’t read the book quite yet, but Ariely has posted videos of himself discussing the first few chapters:



Besides being flat-out interesting, pain research could have public policy implications. Current laws tightly regulate the drugs that are most effective at treating chronic pain, and often discourage doctors (read: scare doctors away) from prescribing them. Earlier this year, Matt Yglesias referenced this kind of research to evaluate some of the costs and benefits of the war on drugs:

This is terrible. One of the most interesting findings from the happiness research literature is that human beings are remarkably good at adapting to all kinds of misfortunes. Chronic pain, however, is an exception. People either get effective treatment for their pain, or else they’re miserable. Adaptation is fairly minimal. The upshot is that from a real human welfare perspective, we ought to put a lot of weight on making sure that people with chronic pain get the best treatment possible. Minimizing addiction is a fine public policy goal, but the priority should be on making sure that people with legitimate needs can get medicine.

Policy decisions require us to weigh the interests of different segments of the population. If we’ve been underestimating the suffering of those in chronic pain, it might be best if we made a shift toward supporting them more and found other ways to offset our worries about addiction.

Another one of Ariely’s suggestions interested me – that events can change the associations we have with pain. I hadn’t given much thought to the dual nature of pain as a physical sensation and an emotional reaction to the sensation. I had always viewed it as a useful but necessarily unpleasant signal that something is wrong with our bodies. Sure, it’s no fun to experience, but we need to know that we’re putting weight on a fractured bone, right? However, if it’s possible to have that physical alert without the mental anguish, we could get the best (well, the slightly better) of both worlds: notification of problems but not the accompanying distress. As Peter O’Toole said in Lawrence of Arabia: “The trick, William Potter, is not minding that it hurts.”

There would be downsides, of course. Pain isn’t just an immediate reaction; it helps shape our future behavior. The emotional component of pain might be important in training ourselves to avoid harmful situations. If we “don’t mind that it hurts,” we would probably be more prone to do stupid things.

At the moment, it’s fairly theoretical to me anyway. If we need to go through acute injuries to get to the tolerance Ariely has, count me out – it’s not worth it to me. But we need to understand suffering in order to reduce it, and research like Ariely’s will help.

(Sidenote: I hear Julia will have a chance to meet Dan Ariely at Burning Man this weekend. I couldn’t go because I’ll be on a business trip to Dragon*Con [I know, no sympathy for me] but I hope she has a fantastic time! I’m not envious or bitter at all… )

Lies and Debunked Legends about the Golden Ratio

In my eyes, there’s a general pecking order for named mathematical constants. Pi is at the top, e gets a good amount of attention, and Tau, like a third-party candidate, sits by itself on the fringes while its supporters tell anyone who’ll listen that it’s a credible alternative to Pi. But somewhere in the middle is Phi, also known as the Golden Ratio. It’s no superstar, but it gets its fair share of credit in geometry and culture.

I was first introduced to Phi as a kid by watching the charming video Donald in Mathmagic Land. One of the things I remembered over the years is that the Greeks used the Golden Ratio in their paintings and architecture, particularly the Parthenon. Thanks to the power of the internet, I can share this piece of my childhood with you:

How brilliant and advanced of the Greeks, right? But there’s one problem…

It’s probably not true. My faith was first shaken reading Keith Devlin’s The Unfinished Game, where he entertained a quick digression:

Two other beliefs about this particular number [Phi] are often mentioned in magazines and books: that the ancient Greeks believed it was the proportion of the rectangle the eye finds most pleasing and that they accordingly incorporated the rectangle in many of their buildings, including the famous Parthenon. These two equally persistent beliefs are likewise assuredly false and, in any case, are completely without any evidence. For one thing, tests have shown that human beings who claim to have a preference at all vary in the rectangle they find most pleasing, both from person to person and often the same person in different circumstances. Also, since the golden ratio is actually not a ratio of two whole numbers, it is impossible to construct (by measurement) a rectangle having that proportion, even in theory.

What?! Donald, I trusted you! It was tempting to tell myself that the Greeks could have found ways to approximate the ratio, and that this is just one source, and I’ve heard it so many times it must be true, and la la la I don’t want Donald to have lied to me.
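To be fair to past-me, close rational approximations of Phi really do exist, even though no ratio of whole numbers ever equals it exactly – ratios of consecutive Fibonacci numbers converge to it. Here’s a quick sketch (my own illustration, not an argument Devlin makes):

```python
# Phi is irrational, so no ratio of whole numbers equals it exactly --
# but consecutive Fibonacci numbers give rational approximations that
# get arbitrarily close.
phi = (1 + 5 ** 0.5) / 2  # 1.6180339887...

a, b = 1, 1
for _ in range(20):
    a, b = b, a + b  # step to the next pair of Fibonacci numbers

print(b, "/", a, "=", b / a)       # 17711 / 10946 = 1.6180339...
print("error:", abs(b / a - phi))  # below one part in a hundred million
```

So a sufficiently careful Greek architect could have gotten as close to Phi as any physical measurement would ever need – which makes the “impossible to construct by measurement” point a technicality, even if the Parthenon claim itself remains unsupported.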

But I looked into it a bit more, checking out what Mario Livio had to say about it in his book The Golden Ratio. He acknowledges that it’s a very common belief, but ultimately backed Devlin up:

The appearance of the Golden Ratio in the Parthenon was seriously questioned by University of Maine mathematician George Markowsky in his 1992 College Mathematics Journal article “Misconceptions about the Golden Ratio.” Markowsky first points out that invariably, parts of the Parthenon (e.g. the edges of the pedestal [in a provided figure]) actually fall outside the sketched Golden Rectangle, a fact totally ignored by all the Golden Ratio enthusiasts. More important, the dimensions of the Parthenon vary from source to source, probably because different reference points are used in the measurements… I am not convinced that the Parthenon has anything to do with the Golden Ratio.

So, was the Golden Ratio used in the Parthenon’s design? It is difficult to say for sure… However, this is far less certain than many books would like us to believe and is not particularly well supported by the actual dimensions of the Parthenon. [emphasis mine]

Alas, claims about the Greeks using Phi in their architecture seem overrated. Some sites bring you celebrity gossip, we bring gossip about celebrated mathematical constants. Welcome to Measure of Doubt!

Watching the video again, I can’t tell exactly how they decided where to overlay the Golden Rectangles. How much of the pedestal do we include in the rectangle? How much of the pillar? Does the waist start here, or there? It seems a bit arbitrary, as though we’re experiencing pareidolia and seeing the Golden Rectangle in everything.

Talk about disillusionment.

Self-Referential Haikus and Nerdy Math Shirts

I don’t always buy t-shirts. But when I do, I tend to make them really nerdy ones. ThinkGeek is a good source, but Snorg Tees might be my new favorite.

Self-reference, like this sentence, is hilarious.

But you can never have just one haiku. When they get out in public, they have a tendency to spawn as people are inspired to create their own. Here was my contribution to the arts:

Haiku are easy
but the ones I write devolve
into self-reference.

To which a friend responded,

Reference. Syllables?
If reference is two, I’m good.
Three? Then I am screwed.

If self-reference isn’t your cup of tea, SnorgTees also has a couple great math shirts:

That’s right: I keep it real. After I posted the picture to Facebook, a cousin commented:

It might be real…but it’s not natural.

and my dad chimed in with the brilliant:

Aren’t you the negative one.

I love my family and friends.

The darker the night, the brighter the stars?

“The darker the night, the brighter the stars” always struck me as a bit of an empty cliche, the sort of thing you say when you want to console someone, or yourself, and you’re not inclined to look too hard at what you really mean. Not that it’s inherently ridiculous that your periods of pleasure might be sweeter if you have previously tasted pain. That’s quite plausible, I think. What made me roll my eyes was the implication that periods of suffering could actually make you better off, overall. That was the part that seemed like an obvious ex post facto rationalization to me. Surely the utility you gain from appreciating the good times more couldn’t possibly outweigh the utility you lose from the suffering itself!

Or could it? I decided to settle the question by modeling the functional relationship between suffering and happiness, making a few basic simplifying assumptions. It should look roughly like this:

Total Happiness = [(1-S) * f(S)] – S

where*
S = % of life spent in suffering
(1-S) = % of life spent in pleasure
f(S) = some function of S

As you can see, f(S) acts as a multiplier on pleasure, so the amount of time you’ve spent in suffering affects how much happiness you get out of your time spent in pleasure. I didn’t want to assume too much about that function, but I think it’s reasonable to say the following:

  • f(S) is increasing — more suffering means you get more happiness out of your pleasure
  • f(0) = 1, because if you have zero suffering, there’s no multiplier effect (and multiplying your pleasure by 1 leaves it unchanged).

… I also made one more assumption which is probably not as realistic as those two:

  •  f(S) is linear.**

Under those assumptions, f(S) can be written as:
f(S) = aS + 1

Now we can ask the question: what percent suffering (S) should we pick to maximize our total happiness? The standard way to answer “optimizing” questions like that is to take the derivative of the quantity we’re trying to maximize (in this case, Total Happiness) with respect to the variable we’re trying to choose the value of (in this case, S), and set that derivative to zero. Here, that works out to:

f'(S) – Sf'(S) – f(S) – 1 = 0

And since we’ve worked out that f(S) = aS + 1, we know that f'(S) = a, and we can plug both of those expressions into the equation above:

a – Sa – aS – 1 – 1 = 0
a – 2aS = 2
-2aS = 2 – a
2aS = a -2
S = (a – 2) / 2a

That means that the ideal value of S (i.e., the ideal % of your life spent suffering, in order to maximize your total happiness) is equal to (a – 2)/2a, where a tells you how strongly suffering magnifies your pleasure.

It might seem like this conclusion is unhelpful, since we don’t know what a is. But there is something interesting we can deduce from the result of all our hard work! Check out what happens when a gets really small or really large. As a approaches 0, the ideal S approaches negative infinity – obviously, it’s impossible to spend a negative percentage of your life suffering, but that just means you want as little suffering as possible. Not too surprising, so far; the lower a is, the less benefit you get from suffering, so the less suffering you want.

But here’s the cool part — as a approaches infinity, the ideal S approaches 1/2. That means that you never want to suffer more than half of your life, no matter how much of a multiplier effect you get from suffering – even if an hour of suffering would make your next hour of pleasure insanely wonderful, you still wouldn’t ever want to spend more time suffering than reaping the benefits of that suffering. Or, to put it in more familiar terms: Darker nights may make stars seem brighter, but you still always want your sky to be at least half-filled with stars.
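If you don’t trust my calculus, here’s a quick numerical check of the same model – a brute-force sketch in Python (the function names are mine):

```python
# Sanity check on the optimization above, assuming the post's linear
# multiplier f(S) = a*S + 1. A grid search over S should agree with the
# derived optimum S* = (a - 2) / (2a), clipped to zero since you can't
# spend a negative fraction of your life suffering.
def total_happiness(s, a):
    """H(S) = (1 - S) * f(S) - S, with f(S) = a*S + 1."""
    return (1 - s) * (a * s + 1) - s

def best_s_numeric(a, steps=100_000):
    """Brute-force the S in [0, 1] that maximizes total happiness."""
    return max((i / steps for i in range(steps + 1)),
               key=lambda s: total_happiness(s, a))

def best_s_closed_form(a):
    """The derived optimum S* = (a - 2) / (2a), clipped at zero."""
    return max(0.0, (a - 2) / (2 * a))

for a in (0.5, 2, 4, 10, 1000):
    assert abs(best_s_numeric(a) - best_s_closed_form(a)) < 1e-3
```

Even for an enormous multiplier like a = 1000, the grid search never picks an S above one-half – exactly the punchline about keeping your sky at least half-filled with stars.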

* You’ll also notice I’m making two unrealistic assumptions here:

(1) I’m assuming there are only two possible states, suffering and pleasure, and that you can’t have different degrees of either one – there’s only one level of suffering and one level of pleasure.

(2) I’m ignoring the fact that it matters when the suffering occurs – e.g., if all your suffering occurs at the end of your life, there’s no way it could retroactively make you enjoy your earlier times of pleasure more. It would probably be more realistic to say that whatever the ideal amount of suffering is in your life, you would want to sprinkle it evenly throughout life because your pleasures will be boosted most strongly if you’ve suffered at least a little bit recently.

** Linearity is a decent starting point, and worth investigating, but I suspect it would be more realistic, if much more complicated, to assume that f(S) is concave, i.e., that greater amounts of suffering continue to increase the benefit you get from pleasure, but by smaller and smaller amounts.

Tales of Badass Mathematicians: Cardano

Girolamo Cardano (Sep 24, 1501 – Sep 21, 1576): Annoying, Arrogant, Brilliant, Badass.

When people think of excitement, intrigue, and violence, they rarely think of mathematicians. That’s because they haven’t heard enough about Girolamo Cardano: 16th century Italian mathematician, physician, inventor, and general badass. This situation must be remedied.

Cardano was one of the first mathematicians to publish an autobiography, and it’s a well-deserved one. Not only did he have academic accomplishments, he led a fascinating life. I was reading about how he published the first mathematical examination of probability theory when this passage in Keith Devlin’s The Unfinished Game caught my eye:

“Throughout his life, Cardano was a compulsive gambler who needed every bit of help he could find at the gambling tables, from mathematics or any other source. (And he did find other sources of help. Once, when he suspected he was being cheated at cards, he took out the knife he always carried with him and slashed his opponent’s face.)”

Let’s just say that Cardano wouldn’t have stood idly by as Roman soldiers disturbed his circles. He wasn’t particularly strong, but according to his autobiography he trained persistently and became quite the swordsman. He also boasts, “Another feat I acquired was how to snatch an unsheathed dagger, myself unarmed, from the one who held it.” Not a mathematician to mess with.

Cardano was also a talented physician. Despite his abilities, the College of Physicians in Milan rejected him – ostensibly due to his illegitimate birth, but probably because he had an annoying personality (something Cardano admits to). That didn’t stop Cardano – though it wasn’t allowed, he treated patients on the side and developed a reputation as one of the best. Even as his fame grew, he couldn’t help but make enemies.

“With a client list that soon included wealthy people of influence in Milan – including some members of the college – it was surely only a matter of time before the college would be forced to admit him. But then, in 1536, still fuming at his continuing exclusion, he killed his chances by publishing a book attacking not only the college members’ medical ability but their character as well.”

Oh, as we used to say in middle school, snap. (Actually, even calling them “artificial” and “insipid” didn’t prevent Cardano from getting into the College – Devlin goes on to say that they admitted him a couple years later under pressure from supporters.)

The drama goes on and on. He got into a feud with Tartaglia, another mathematician, over whether he had promised to keep Tartaglia’s method for solving cubic equations secret. His eldest son was convicted of poisoning his cheating wife, and Cardano wasn’t able to save him from torture and execution. Then his younger son got into gambling debt and stole from Cardano, who sadly turned him over to the authorities to be banished.

Later, in what Devlin suspects was a deliberate attempt to gain notoriety, Cardano provoked the Catholic Church by publishing a horoscope for Jesus Christ and writing a book praising the anti-Christian Nero. He was convicted of heresy. (To add to the intrigue, Wikipedia says that “Apparently, his own son contributed to the prosecution, bribed by Tartaglia.”) After serving a few months in prison and making up with the Pope, he spent the last few years of his life writing an autobiography.

Even his death had style. Cardano died on September 21st, 1576 – the exact date he had predicted years earlier. It’s believed that he committed suicide just to make sure he got the date right. What a way to go.

Calibrating our Confidence


It’s one thing to know how confident we are in our beliefs; it’s another to know how confident we should be. Sure, the de Finetti’s Game thought experiment gives us a way to put a number on our confidence – quantifying how likely we feel we are to be right. But we still need to learn to calibrate that sense of confidence against the results. Are we appropriately confident?

Taken at face value, if we express 90% confidence 100 times, we expect to be proven wrong an average of 10 times. But very few people take the time to see whether that’s the case. We can’t trust our memories on this, as we’re probably more likely to remember our accurate predictions and forget all the offhand predictions that fell flat. If we want to get an accurate sense of how well we’ve calibrated our confidence, we need a better way to track it.

Well, here’s a way: PredictionBook.com. While working on my last post, I stumbled on this nifty project. Its homepage features the words “How Sure Are You?” and “Find out just how sure you should be, and get better at being only as sure as the facts justify.” Sounds perfect, right?

It allows you to enter your prediction, how confident you are, and when the answer will be known.  When the time comes, you record whether or not you were right and it tracks your aggregate stats.  Your predictions can be private or public – if they’re public, other people can weigh in with their own confidence levels and see how accurate you’ve been.

(This site isn’t new to rationalists: Eliezer and the LessWrong community noticed it a couple years ago, and LessWrong’er Gwern has been using it to – among other things – track Intrade predictions.)

Since I don’t know who’s using the site and how, I don’t know how seriously to take the following numbers. So take this chart with a heaping dose of salt. But I’m not surprised that the confidences entered are higher than the likelihood of being right:

Predicted certainty:   50%   60%   70%   80%   90%   100%
Actual accuracy:       37%   52%   58%   70%   79%    81%
Sample size:           350   544   561   558   709    219   (total: 2,941)
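The bookkeeping behind a table like this is simple, if you want to run the tally on your own predictions – something like the following sketch (the names here are my own invention, not PredictionBook’s actual code):

```python
# A minimal sketch of calibration tracking: record (stated confidence,
# outcome) pairs, then compare each confidence bucket's actual hit rate
# to the confidence itself.
from collections import defaultdict

def calibration_table(predictions):
    """predictions: iterable of (confidence_percent, came_true) pairs."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[confidence].append(came_true)
    # For each confidence level, report (hit rate, number of predictions).
    return {conf: (sum(outcomes) / len(outcomes), len(outcomes))
            for conf, outcomes in sorted(buckets.items())}

# A well-calibrated forecaster at 90% should be right about 9 times in 10:
history = [(90, True)] * 9 + [(90, False)] + [(70, True)] * 7 + [(70, False)] * 3
for conf, (hit_rate, n) in calibration_table(history).items():
    print(f"said {conf}% -> right {hit_rate:.0%} of {n} predictions")
```

The interesting part isn’t the code, of course – it’s the discipline of writing the prediction down before you know the answer.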

Sometimes the miscalibration matters more than others. In Mistakes Were Made (but not by me), Tavris and Aronson describe the overconfidence police interrogators feel about their ability to discern honest denials from false ones. In one study, researchers selected videos of police officers interviewing suspects who were denying a crime – some innocent and some guilty.

Kassin and Fong asked forty-four professional detectives in Florida and Ontario, Canada, to watch the tapes. These professionals averaged nearly fourteen years of experience each, and two-thirds had had special training, many in the Reid Technique. Like the students [in a similar study], they did no better than chance, yet they were convinced that their accuracy rate was close to 100 percent. Their experience and training did not improve their performance. Their experience and training simply increased their belief that it did.

As a result, more people are falsely imprisoned as prosecutors steadfastly pursue convictions for people they’re sure are guilty. This is a case in which poor calibration does real harm.

Of course, it’s often a more benign issue. Since finding PredictionBook, I see everything as a prediction to be measured. A coworker and I were just discussing plans to have a group dinner, and had the following conversation (almost word for word):

Her: “How do you feel about squash?”
Me: “I’m uncertain about squash…”
Her: “What about sauteed in butter and garlic?”
Me: “That has potential. My estimation of liking it just went up slightly.”
*Runs off to enter prediction*

I’ve already started making predictions in hopes that tracking my calibration errors will help me correct them. I wish PredictionBook had tags – it would be fascinating (and helpful!) to know that I’m particularly prone to misjudge whether I’ll like foods or that I’m especially well-calibrated at predicting the winners of sports games.

And yes, I will be using PredictionBook on football this season. Every week I’ll try to predict the winners and losers, and see whether my confidence is well-placed. Honestly, I expect to see some homer-bias and have too much confidence in the Ravens.  Isn’t exposing irrationality fun?