Being a Dick is not Binary

(crossposted at Friendly Atheist)

“Should we be offensive?” is a common question in the secular movement. It’s also the wrong question.

The title of this post comes from Phil Plait’s “Don’t be a Dick” talk at TAM 8, which sparked conversation about the wisdom of offending people in the cause of critical thinking. Though it generated the most attention, it’s not the first time we’ve asked these questions: Should we condemn people for opposing LGBT rights? Mock people for believing in creationism? Call religion a delusion? Sometimes it seems like everything we do offends people – even the simple act of advertising our existence offended Iowa Governor Chet Culver.

In the face of that, it’s almost liberating, isn’t it? If everything we do is offensive, it doesn’t matter anymore – we can stop worrying about it. In fact, I used to argue that myself! When confronted with accusations that Everybody Draw Muhammad Day was offensive, I’d point to the bus ads and billboards and say, “People get offended at the most mundane things. We can’t let that hold us back.”

But offensiveness is not a simple yes-or-no issue. As Julia wrote a few months ago, it’s tempting to treat belief as a black-and-white matter. It’s not – we can hold beliefs with differing degrees of confidence, and if we treat belief as binary we lose a lot of power to make distinctions, see nuance, and chart the best course of action. It’s the same with asking whether or not to be offensive. We need to add nuance.

At the first level, it’s probably more helpful to phrase the question “How many people are my actions likely to offend?” Not all offensive statements are equal. Sure, saying “People can be good without god” offends people, but not as many people as “Religion is a myth.”

We can go further. Asking how many people we expect to offend still treats the issue as a binary: they’re either offended or they’re not. A better phrasing would be “How offended will people be?” Billboards reading “Religion is a myth” and “Jesus was a bastard” would both upset a lot of people – but not to the same extent.

But even this isn’t what we want to be asking. To take the final step, we need to dissolve the question away into what we actually want to know. Each time we ask “Should we be a dick in this situation?” we’re really wondering a lot of things, like:

  • Do we like the short-term and long-term reactions this will elicit?
  • Would it attract attention for our message?
  • Would it reduce the chance of persuading the target?
  • Would it help push the boundaries of the national conversation?
  • Would it damage a helpful relationship?

There isn’t an inherent property “being offensive” or “being a dick” – that’s just a heuristic, and it’s not very precise. Well, maybe I shouldn’t say just a heuristic – labeling a message as ‘offensive’ is a helpful way to talk about expected reactions. But we need to be able to step back and refocus our attention when the heuristic causes confusion.

And the heuristic IS causing confusion. Treating it as a single, inherent property leads people to miss the strategic benefits – and drawbacks – of getting people upset in different ways and contexts. Treating it as a binary question leads people to wield anger indiscriminately rather than tactically.

What we should be asking ourselves, when choosing a message, is this: “How offended do we want people to be, and offended how?”

For example, I still stand behind my support of Everybody Draw Muhammad Day – it did cause a lot of offense, but it offended people in the right way: by intentionally disregarding the Islamic demand that we respect their prophet. That was the goal – shocking people into paying more attention to a dogma which wouldn’t stand up to scrutiny.

On the other hand, I wouldn’t support using mockery in a one-on-one conversation with a creationist. When we’re trying to educate someone, a small amount of offense is useful to catch their attention – say, by openly disagreeing. But mockery is a different kind of offense, one that reduces our chances of convincing them.

Sometimes it’s easier to talk simply about whether or not to offend people. But we can be much more precise by thinking in terms of anger, surprise, disrespect, and disagreement.

They say the devil’s in the details – so we should feel right at home.

Ender Wiggin, Harry Potter, and Kurt Gödel

I love finding real-life connections to my favorite fictional characters. One of the consistent criticisms I hear about Ender’s Game is that people have trouble buying into the notion that children as young as six can be so intelligent, rational, and independent. That’s also a knock against Harry in Harry Potter and the Methods of Rationality (which was clearly influenced by Ender’s Game) – he just doesn’t fit with how we expect eleven-year-olds to behave. But if we accept the premise of a hyper-intelligent child, would the other traits follow?

I was reading Rebecca Goldstein’s book Incompleteness, on the life and work of Kurt Gödel, and young Kurt might fit the bill. Gödel was an extremely intelligent child, far more intelligent than his parents. Goldstein thinks he realized this as early as age five, and that it had a big impact on his character:

It would be comforting, in the presence of such a shattering conclusion… to derive the following additional conclusion: There are always logical explanations and I am exactly the sort of person who can discover such explanations. The grownups around me may be a sorry lot, but luckily I don’t need to depend on them. I can figure out everything for myself. The world is thoroughly logical and so is my mind – a perfect fit.

It’s been a while since I’ve read Ender’s Game, but that sounded pretty familiar – the grown-ups were unable (his parents) or unwilling (the teachers) to protect him, so he had to find ways to solve problems himself.

I’ve read Harry Potter and the Methods of Rationality much more recently, and he might be a closer fit. In this version, Harry is extremely intelligent and raised by parents who love him, but are – frankly – unable to keep up. This particular passage caught my eye:

Harry nodded. “I still don’t know whether the Headmaster was joking or… the thing is, he was right in a way. I had loving parents, but I never felt like I could trust their decisions, they weren’t sane enough. I always knew that if I didn’t think things through myself, I might get hurt… Even if it’s sad, I think that’s part of the environment that creates what Dumbledore calls a hero – people who don’t have anyone else to shove final responsibility onto, and that’s why they form the mental habit of tracking everything themselves.”

Situations like Kurt Gödel’s are rare, but that’s the point of fiction. Given his example, perhaps it’s not SO big of a stretch that children who surpass their parents at such a young age would turn into an Ender Wiggin or “rational” Harry Potter.

At the very least, perhaps this connection will help people suspend their disbelief a little bit, and go read either of these fantastic works of fiction.

Reflections on pain, from the burn unit

Deep frying: even more hazardous to your health than I realized.

Yesterday marked the end of my 18-day stay in New York Presbyterian Hospital’s burn unit, where I landed after accidentally overturning a pot of hot cooking oil onto myself. I ended up with second- and third-degree burns over much of my legs, but after skin graft surgery and some physical therapy, I can walk again, albeit unsteadily, and I have skin on my legs again, albeit ugly skin.

I learned a lot during my hospital stay. Unfortunately, nearly all of that hard-earned knowledge concerned very specific topics – the ideal cocktail of pills, the least-uncomfortable position to sleep in, etc. – which will neither be applicable in other contexts nor interesting to other people. But I did leave with one realization about pain, and how we experience it.

I wasn’t in constant pain for the entire 18 days, by any means, but every day featured at least a few painful experiences, from the minor (frequent shots) to the major (scraping the dead skin off the burns). I tried a handful of methods to deal with it. Deep breathing helped a bit, as did pulling my own hair. One friend suggested I try imagining myself existing at a point halfway across the room; that helped a little, but only because our philosophical argument over whether it was even possible to pull off such a mental stunt briefly distracted me from my throbbing legs.

But the one thing that did seem to dramatically affect my pain level was my belief about what was causing the pain. At one point, I was lying on my side and a nurse was pulling a bandage off of one of my burns; I couldn’t see what she was doing, but it felt like the bandage was sticking to the wound, and it was agonizing. But then she said: “Now, keep in mind, I’m just taking off the edges of the bandage here, so this is all normal skin. It just hurts because it’s like pulling tape off your skin.” And once she said that — once I started picturing tape being pulled off of normal, intact skin rather than an open wound — the pain didn’t bother me nearly as much. It really drove home to me how much of my experience of pain is psychological; if I believe the cause of the pain is something frightening or upsetting, then the pain seems much worse.

And in fact, I’d had a similar thought a few months ago, which I’d then forgotten about until the burn experience called it back to mind. I’d been carrying a heavy shopping bag on my shoulder one day, and the weight of the bag’s straps was cutting into the skin on my shoulder. But I barely noticed it. And then it occurred to me that if I had been experiencing that exact same sensation on my shoulder, in the absence of a shopping bag, it would have seemed quite painful. The fact that I knew the sensation was caused by something mundane and harmless reduced the pain so much it didn’t even register in my mind as a negative experience.

Of course, I probably can’t successfully lie to myself about what’s causing me pain, so there’s a limit to how directly useful this observation can be for managing pain in the future. But it was indirectly useful for me, because it proved to me something I’d heard but never quite believed: that the unpleasantness of pain is substantially (entirely?) psychologically constructed. A bit of subsequent reading led me to some fascinating science that underlines that conclusion – for example, the fact that the physical sensation of pain is processed by one region of the brain while the unpleasantness of that sensation is processed by another, and the existence of a condition called pain asymbolia, in which people with certain kinds of brain damage say they’re able to feel pain but don’t find it the slightest bit unpleasant.

The relationship between pain and unpleasantness is a philosophically interesting one, in fact. Unpleasantness is usually considered to be built into the very definition of pain, so it’s quite confusing to talk about experiencing different levels of unpleasantness from the same level of pain. And it’s even more confusing to talk about experiencing no unpleasantness from pain, as people with pain asymbolia do. The idea feels almost as incoherent as that of being happy but not enjoying it, or doubling a number without making it any bigger.

But observing my own experiences of pain a bit more closely has made it a little easier for me to wrap my mind around the idea. I really did feel, when the nurse informed me that she was pulling the bandage off of intact skin rather than burned skin, like the pain was the same but the unpleasantness was lessened. It’s harder to imagine pain with no unpleasantness, but perhaps my shopping bag example sheds a little light: I felt the sensation of something cutting into my shoulder, but it didn’t bother me. So maybe someone with pain asymbolia would experience a cutting sensation as if they’re just carrying a heavy shopping bag, with no “Warning!” and “This is awful!” alarms going off in their mind.

I’ll have to think more about the relationship between pain and the experience of pain, because it’s still confusing to me, but at least I can feel like I got some new philosophical food for thought out of my 18 days at NY Presbyterian. Not to mention the very practical, un-philosophical lesson: don’t leave your giant pots of oil near the edge of the stove.

(ETA: I completely forgot, while writing this, that Jesse had touched on this very subject last month! Wow, Jesse — in retrospect, that’s an eerily prescient post.)

The Social Psychology of Burning Man

(Cross-posted at Scientific American’s Guest Blog.)

I just finished shaking the last of the desert dust out of the bags I brought to this year’s Burning Man, an annual week-long event in Nevada’s Black Rock Desert that takes its name from the burning of a giant effigy at the end of the week.  According to popular perception, Burning Man is a non-stop rave thrown by a bunch of drugged-out naked hippies. That’s not entirely false, admittedly, but it’s only a small piece of the picture.

Burning Man is also a large-scale social experiment. The 50,000 people who converge on the desert each year create a temporary but legitimate city – roughly the size of Santa Cruz, CA or Flagstaff, AZ — with its own street grid, laws, and social mores. In the process, they attempt to do away with several of the most fundamental institutions underlying modern civilization. Clothing, for example, is optional at Burning Man, and many people opt out of it.

Money, on the other hand, is not optional: it’s explicitly banned. People exchange goods and services constantly, but money never changes hands, except in one specially designated central tent which sells coffee and tea. I’ve heard Burning Man sometimes described as a “barter economy,” but that’s not quite right. It’s more of a “gift economy,” in which people give strangers food, drinks, clothing, massages, bike repairs, rides back to camp, and more, all without any expectation of reciprocation. Many attendees also invest a great deal of their own time and money beforehand to make other people’s experiences at Burning Man more beautiful, interesting, and comfortable, setting up tents or couches for public use or crafting elaborate art installations out in the desert for others to discover.

Read the rest at Scientific American.

Pain Research: Not Minding That It Hurts

How well can we adapt to pain in the long run? Since pain is such a source of disutility, it’s important for us to learn as much as we can about managing or reducing its impact on our lives. One researcher studying the issue is Dan Ariely, who has a rare perspective after suffering major burns at a young age. He describes some fascinating findings at the beginning of one of his TED Talks (before moving on to his research on cheating), but he devotes a whole chapter to adaptation in his recent book, The Upside of Irrationality.

I haven’t read the book yet, but Ariely has posted videos of himself discussing the first few chapters.



Besides being flat-out interesting, pain research could have public policy implications. Current laws tightly regulate the drugs most effective at treating chronic pain, and often discourage doctors (read: scare doctors away) from prescribing them. Earlier this year, Matt Yglesias referenced this kind of research to evaluate some of the costs and benefits of the war on drugs:

This is terrible. One of the most interesting findings from the happiness research literature is that human beings are remarkably good at adapting to all kinds of misfortunes. Chronic pain, however, is an exception. People either get effective treatment for their pain, or else they’re miserable. Adaptation is fairly minimal. The upshot is that from a real human welfare perspective, we ought to put a lot of weight on making sure that people with chronic pain get the best treatment possible. Minimizing addiction is a fine public policy goal, but the priority should be on making sure that people with legitimate needs can get medicine.

Policy decisions require us to weigh the interests of different segments of the population. If we’ve been underestimating the suffering of those in chronic pain, it might be best if we made a shift toward supporting them more and found other ways to offset our worries about addiction.

Another of Ariely’s suggestions interested me – that events can change the associations we have with pain. I hadn’t given much thought to the dual nature of pain as a physical sensation and an emotional reaction to that sensation. I had always viewed it as a useful but necessarily unpleasant signal that something is wrong with our bodies. Sure, it’s no fun to experience, but we need to know that we’re putting weight on a fractured bone, right? However, if it’s possible to have that physical alert without the mental anguish, we could get the best (well, the slightly better) of both worlds: notification of problems but not the accompanying distress. As Peter O’Toole said in Lawrence of Arabia: “The trick, William Potter, is not minding that it hurts.”

There would be downsides, of course. Pain isn’t just an immediate reaction; it helps shape our future behavior. The emotional component of pain might be important in training ourselves to avoid harmful situations. If we “don’t mind that it hurts,” we would probably be more prone to do stupid things.

At the moment, it’s fairly theoretical to me anyway. If we need to go through acute injuries to get to the tolerance Ariely has, count me out – it’s not worth it to me. But we need to understand suffering in order to reduce it, and research like Ariely’s will help.

(Sidenote: I hear Julia will have a chance to meet Dan Ariely at Burning Man this weekend. I couldn’t go because I’ll be on a business trip to Dragon*Con [I know, no sympathy for me] but I hope she has a fantastic time! I’m not envious or bitter at all… )

The darker the night, the brighter the stars?

“The darker the night, the brighter the stars” always struck me as a bit of an empty cliché, the sort of thing you say when you want to console someone, or yourself, and you’re not inclined to look too hard at what you really mean. Not that it’s inherently ridiculous that your periods of pleasure might be sweeter if you have previously tasted pain. That’s quite plausible, I think. What made me roll my eyes was the implication that periods of suffering could actually make you better off, overall. That was the part that seemed like an obvious ex post facto rationalization to me. Surely the utility you gain from appreciating the good times more couldn’t possibly outweigh the utility you lose from the suffering itself!

Or could it? I decided to settle the question by modeling the functional relationship between suffering and happiness, making a few basic simplifying assumptions. It should look roughly like this:

Total Happiness = [(1-S) * f(S)] – S

where*
S = % of life spent in suffering
(1-S) = % of life spent in pleasure
f(S) = some function of S

As you can see, f(S) acts as a multiplier on pleasure, so the amount of time you’ve spent in suffering affects how much happiness you get out of your time spent in pleasure. I didn’t want to assume too much about that function, but I think it’s reasonable to say the following:

  • f(S) is increasing — more suffering means you get more happiness out of your pleasure
  • f(0) = 1, because if you have zero suffering, there’s no multiplier effect (and multiplying your pleasure by 1 leaves it unchanged).

… I also made one more assumption which is probably not as realistic as those two:

  •  f(S) is linear.**

Under those assumptions, f(S) can be written as:
f(S) = aS + 1

Now we can ask the question: what percent suffering (S) should we pick to maximize our total happiness? The standard way to answer “optimizing” questions like that is to take the derivative of the quantity we’re trying to maximize (in this case, Total Happiness) with respect to the variable we’re trying to choose the value of (in this case, S), and set that derivative to zero. Here, that works out to:

f'(S) – Sf'(S) – f(S) – 1 = 0

And since we’ve worked out that f(S) = aS + 1, we know that f'(S) = a, and we can plug both of those expressions into the equation above:

a – Sa – aS – 1 – 1 = 0
a – 2aS = 2
-2aS = 2 – a
2aS = a -2
S = (a – 2) / 2a

That means that the ideal value of S (i.e., the ideal % of your life spent suffering, in order to maximize your total happiness) is equal to (a – 2)/2a, where a tells you how strongly suffering magnifies your pleasure.

It might seem like this conclusion is unhelpful, since we don’t know what a is. But there is something interesting we can deduce from the result of all our hard work! Check out what happens when a gets really small or really large. As a approaches 0, the ideal S approaches negative infinity – obviously, it’s impossible to spend a negative percentage of your life suffering, but that just means you want as little suffering as possible. Not too surprising, so far; the lower a is, the less benefit you get from suffering, so the less suffering you want.

But here’s the cool part — as a approaches infinity, the ideal S approaches 1/2. That means that you never want to suffer more than half of your life, no matter how much of a multiplier effect you get from suffering – even if an hour of suffering would make your next hour of pleasure insanely wonderful, you still wouldn’t ever want to spend more time suffering than reaping the benefits of that suffering. Or, to put it in more familiar terms: Darker nights may make stars seem brighter, but you still always want your sky to be at least half-filled with stars.
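
To double-check the algebra, here’s a quick numerical sanity check – a sketch of my own in Python, with arbitrary sample values of a:

```python
import numpy as np

def total_happiness(S, a):
    # Total Happiness = (1 - S) * f(S) - S, with the linear multiplier f(S) = a*S + 1
    return (1 - S) * (a * S + 1) - S

for a in [3, 10, 100, 10000]:
    S_star = (a - 2) / (2 * a)        # the closed-form optimum derived above
    grid = np.linspace(0, 1, 100001)  # brute-force search over S in [0, 1]
    S_numeric = grid[np.argmax(total_happiness(grid, a))]
    print(f"a = {a:>5}: formula S* = {S_star:.4f}, grid search = {S_numeric:.4f}")
```

For each value of a, the brute-force maximum lands on the closed-form value, and as a grows the optimum creeps toward 1/2 without ever crossing it.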

* You’ll also notice I’m making two unrealistic assumptions here:

(1) I’m assuming there are only two possible states, suffering and pleasure, and that you can’t have different degrees of either one – there’s only one level of suffering and one level of pleasure.

(2) I’m ignoring the fact that it matters when the suffering occurs – e.g., if all your suffering occurs at the end of your life, there’s no way it could retroactively make you enjoy your earlier times of pleasure more. It would probably be more realistic to say that whatever the ideal amount of suffering is in your life, you would want to sprinkle it evenly throughout life because your pleasures will be boosted most strongly if you’ve suffered at least a little bit recently.

** Linearity is a decent starting point, and worth investigating, but I suspect it would be more realistic, if much more complicated, to assume that f(S) is concave, i.e., that greater amounts of suffering continue to increase the benefit you get from pleasure, but by smaller and smaller amounts.

Calibrating our Confidence


It’s one thing to know how confident we are in our beliefs; it’s another to know how confident we should be. Sure, the de Finetti’s Game thought experiment gives us a way to put a number on our confidence – quantifying how likely we feel we are to be right. But we still need to calibrate that sense of confidence against actual results. Are we appropriately confident?

Taken at face value, if we express 90% confidence 100 times, we expect to be proven wrong an average of 10 times. But very few people take the time to see whether that’s the case. We can’t trust our memories on this, as we’re probably more likely to remember our accurate predictions and forget all the offhand predictions that fell flat. If we want to get an accurate sense of how well we’ve calibrated our confidence, we need a better way to track it.

Well, here’s a way: PredictionBook.com. While working on my last post, I stumbled on this nifty project. Its homepage features the words “How Sure Are You?” and “Find out just how sure you should be, and get better at being only as sure as the facts justify.” Sounds perfect, right?

It allows you to enter your prediction, how confident you are, and when the answer will be known.  When the time comes, you record whether or not you were right and it tracks your aggregate stats.  Your predictions can be private or public – if they’re public, other people can weigh in with their own confidence levels and see how accurate you’ve been.

(This site isn’t new to rationalists: Eliezer and the LessWrong community noticed it a couple of years ago, and LessWronger Gwern has been using it to – among other things – track Intrade predictions.)

Since I don’t know who’s using the site and how, I don’t know how seriously to take the following numbers. So take this chart with a heaping dose of salt. But I’m not surprised that the confidences entered are higher than the likelihood of being right:

Predicted Certainty    50%    60%    70%    80%    90%   100%   Total
Actual Certainty       37%    52%    58%    70%    79%    81%
Sample Size            350    544    561    558    709    219    2941
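
Producing a table like this from your own records is just bookkeeping: group predictions by their stated confidence and compare each group’s hit rate. Here’s a minimal sketch in Python – the records below are invented for illustration:

```python
from collections import defaultdict

# Each record: (stated confidence, whether the prediction came true).
# These records are invented for illustration.
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False),
    (0.6, True), (0.6, False), (0.6, False),
]

buckets = defaultdict(list)
for confidence, correct in predictions:
    buckets[confidence].append(correct)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Stated {confidence:.0%}: right {hit_rate:.0%} of the time (n = {len(outcomes)})")
```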

Sometimes the miscalibration matters more than others. In Mistakes Were Made (but not by me), Tavris and Aronson describe the overconfidence police interrogators feel about their ability to discern honest denials from false ones. In one study, researchers selected videos of police officers interviewing suspects who were denying a crime – some innocent and some guilty.

Kassin and Fong asked forty-four professional detectives in Florida and Ontario, Canada, to watch the tapes. These professionals averaged nearly fourteen years of experience each, and two-thirds had had special training, many in the Reid Technique. Like the students [in a similar study], they did no better than chance, yet they were convinced that their accuracy rate was close to 100 percent. Their experience and training did not improve their performance. Their experience and training simply increased their belief that it did.

As a result, more people are falsely imprisoned as prosecutors steadfastly pursue convictions for people they’re sure are guilty. This is a case in which poor calibration does real harm.

Of course, it’s often a more benign issue. Since finding PredictionBook, I see everything as a prediction to be measured. A coworker and I were just discussing plans to have a group dinner, and had the following conversation (almost word for word):

Her: “How do you feel about squash?”
Me: “I’m uncertain about squash…”
Her: “What about sauteed in butter and garlic?”
Me: “That has potential. My estimation of liking it just went up slightly.”
*Runs off to enter prediction*

I’ve already started making predictions in hopes that tracking my calibration errors will help me correct them. I wish PredictionBook had tags – it would be fascinating (and helpful!) to know that I’m particularly prone to misjudge whether I’ll like foods, or that I’m especially well-calibrated at predicting the winners of sports games.

And yes, I will be using PredictionBook on football this season. Every week I’ll try to predict the winners and losers, and see whether my confidence is well-placed. Honestly, I expect to see some homer-bias and have too much confidence in the Ravens.  Isn’t exposing irrationality fun?

De Finetti’s Game: How to Quantify Belief

What do people really mean when they say they’re “sure” of something? Everyday language is terrible at describing actual levels of confidence – it lumps together different degrees of belief into vague groups which don’t always match from person to person. When one friend tells you she’s “pretty sure” we should turn left and another says he’s “fairly certain” we should turn right, it would be useful to know how confident they each are.

Sometimes it’s enough to hear your landlord say she’s pretty sure you’ll get towed from that parking space – you’d move your car. But when you’re basing an important decision on another person’s advice, it would be better to describe confidence on an objective, numeric scale. It’s not necessarily easy to quantify a feeling, but there’s a method that can help.

Bruno de Finetti, a 20th-century Italian mathematician, came up with a creative idea called de Finetti’s Game to help connect the feeling of confidence to a percent (hat tip Keith Devlin in The Unfinished Game). It works like this:


Suppose you’re half a mile into a road trip when your friend tells you that he’s “pretty sure” he locked the door. Do you go back? When you ask him for a specific number, he replies breezily that he’s 95% sure. Use that number as a starting point and begin the thought experiment.

In the experiment, you show your friend a bag with 95 red and 5 blue marbles. You then offer him a choice: he can either pick a marble at random and, if it’s red, win $1 million. Or he can go back and verify that the door is locked and, if it is, get $1 million.

If your friend would choose to draw a marble from the bag, he prefers the 95% chance to win. His real confidence that he locked the door must be somewhere below that. So you play another round – this time with 80 red and 20 blue marbles. If he would rather check the door this time, his confidence is higher than 80%, and perhaps you try an 87/13 split next round.

And so on. You keep offering different deals in order to home in on the level where he feels equally comfortable selecting a random marble and checking the door. That’s his real level of confidence.
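
In effect, the rounds of the game perform a binary search over probabilities. Here’s a minimal sketch of that bookkeeping in Python – my own formalization, not de Finetti’s exact protocol, with the hypothetical prefers_marbles callback standing in for your friend’s choice each round:

```python
def definettis_game(prefers_marbles, low=0.0, high=1.0, rounds=10):
    # Narrow down someone's confidence in an event by repeatedly offering
    # a choice between betting on the event and drawing from a bag in
    # which a fraction p of the marbles are winners.
    for _ in range(rounds):
        p = (low + high) / 2
        if prefers_marbles(p):
            # The marble draw looked better, so their confidence
            # in the event must be below p.
            high = p
        else:
            # They'd rather bet on the event: confidence is above p.
            low = p
    return (low + high) / 2

# Example: simulate a friend whose true (unstated) confidence is 0.83.
estimate = definettis_game(lambda p: p > 0.83)
print(f"Estimated confidence: {estimate:.3f}")  # converges near 0.83
```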


The thought experiment should guide people through the tricky process of connecting their feeling of confidence to a corresponding percent. The answer will still be somewhat fuzzy – after all, we’re still relying on a feeling that one option is better than another.

It’s important to remember that the game doesn’t tell us how likely we are to BE right. It only tells us about our confidence – which can be misplaced. From cognitive dissonance to confirmation bias there are countless psychological influences messing up the calibration between our confidence level and our chance of being right. But the more we pay attention to the impact of those biases, the more we can do to compensate. It’s a good practice (though pretty rare) to stop and think, “Have I really been as accurate as I would expect, given how confident I feel?”

I love the idea of measuring people’s confidence (and not just because I can rephrase it as measuring their doubt). I just love being able to quantify things! We can quantify exactly how much a new piece of evidence is likely to affect jurors, how much a person’s suit affects their persuasive impact, or how much confidence affects our openness to new ideas.

We could even use de Finetti’s Game to watch the inner workings of our minds doing Bayesian updating. Maybe I’ll try it out on myself to see how confident I feel that the Ravens will win the Super Bowl this year, before and after the Week 1 game against the rival Pittsburgh Steelers. I expect that my feeling of confidence won’t shift quite in accordance with what Bayesian analysis tells me a fully rational person would believe. It’ll be fun to see just how irrational I am!
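
For reference, the fully rational version of that update is just Bayes’ rule. Here’s what it looks like with made-up numbers – the prior and the likelihoods below are purely illustrative:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# H = "the Ravens win the Super Bowl"; E = "the Ravens beat the Steelers in Week 1"
prior = 0.10            # illustrative prior confidence in H
p_e_given_h = 0.75      # illustrative: eventual champions usually win that game
p_e_given_not_h = 0.50  # illustrative: otherwise, call it a coin flip

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(f"Confidence after a Week 1 win: {posterior:.3f}")  # ~0.143
```

Comparing the confidence de Finetti’s Game elicits after the game to a posterior like that would show exactly how far my gut update falls short.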

RS#37: The science and philosophy of happiness

On Episode #37 of the Rationally Speaking podcast, Massimo and I talk about the science and philosophy of happiness:

“Debates over what’s important to happiness — Money? Children? Love? Achievement? — are ancient and universal, but attempts to study the subject empirically are much newer. What have psychologists learned about which factors have a strong effect on people’s happiness and which don’t? Are parents really less happy than non-parents, and do people return to their happiness “set point” even after extreme events like winning the lottery or becoming paralyzed? We also tackle some of the philosophical questions regarding happiness, such as whether some kinds of happiness are “better” than others, and whether people can be mistaken about their own happiness. But, perhaps the hardest question is: can happiness really be measured?”

Bayesian truth serum

Here’s a sneaky trick for extracting the truth from someone even when she’s trying to conceal it from you: Rather than asking her how she thinks or behaves, ask her how she thinks other people think or behave.

MIT professor of psychology and cognitive science Drazen Prelec calls this trick “Bayesian truth serum,” according to Tyler Cowen in Discover Your Inner Economist. The logic behind it is simple: our impressions of “typical” attitudes and behavior are colored by our own attitudes and behavior. And that’s reasonable. You should count yourself as one data point in your sample of “how people think and behave.”

Your own data point is likely to influence your sample more strongly than the others, however, for two reasons. First, it’s much more salient to you than your data about other people, so you’re likely to overweight it in your estimation. Second, there’s a ripple effect: people tend to cluster with others who think and act similarly to themselves, so however your sample differs from the general population is an indicator of how you yourself differ from the general population.

So, to use Cowen’s example, if you ask a man how many sexual partners he’s had, he might have a strong incentive to lie, either downplaying or exaggerating his history depending on who you are and what he wants you to think of him. But his estimate of a “typical” number will still be influenced by his own, and by that of his friends and acquaintances (who, because of the selection effect, are probably more similar to him than the general population is). “When we talk about other people,” Cowen writes, “we are often talking about ourselves, whether we know it or not.”
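
A toy simulation makes the ripple effect concrete – the clustering weight, the sample sizes, and the population mean below are all invented for illustration:

```python
import random

random.seed(0)
TRUE_MEAN = 7.0  # the population average of the sensitive quantity

def estimate_of_typical(own_value):
    # A person's guess at what's "typical," formed from their own value plus
    # a circle of friends who resemble them (the ripple effect).
    friends = [0.7 * own_value + 0.3 * random.gauss(TRUE_MEAN, 2) for _ in range(10)]
    return (own_value + sum(friends)) / 11

# People with higher own values report higher "typical" estimates,
# so the answer to "what's typical?" leaks their own data.
for own in [2, 7, 15]:
    print(f"own value {own:>2} -> estimates typical as {estimate_of_typical(own):.1f}")
```

The respondent’s own value shows through in every estimate – which is exactly the leak the truth serum exploits.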
