How rationality can make your life more awesome

(Cross-posted at Rationally Speaking)

Sheer intellectual curiosity was what first drew me to rationality (by which I mean, essentially, the study of how to view the world as accurately as possible). I still enjoy rationality as an end in itself, but it didn’t take me long to realize that it’s also a powerful tool for achieving pretty much anything else you care about. Below, a survey of some of the ways that rationality can make your life more awesome:

Rationality alerts you when you have a false belief that’s making you worse off.

You’ve undoubtedly got beliefs about yourself – about what kind of job would be fulfilling for you, for example, or about what kind of person would be a good match for you. You’ve also got beliefs about the world – say, about what it’s like to be rich, or about “what men want” or “what women want.” And you’ve probably internalized some fundamental maxims, such as: When it’s true love, you’ll know. You should always follow your dreams. Natural things are better. Promiscuity reduces your worth as a person.

Those beliefs shape your decisions about your career, what to do when you’re sick, what kind of people you decide to pursue romantically and how you pursue them, how much effort you should be putting into making yourself richer, or more attractive, or more skilled (and skilled in what?), more accommodating, more aggressive, and so on.

But where did these beliefs come from? The startling truth is that many of our beliefs became lodged in our psyches rather haphazardly. We’ve read them, or heard them, or picked them up from books or TV or movies, or perhaps we generalized from one or two real-life examples.

Rationality trains you to notice your beliefs, many of which you may not even be consciously aware of, and ask yourself: where did those beliefs come from, and do I have good reason to believe they’re accurate? How would I know if they’re false? Have I considered any other, alternative hypotheses?

Rationality helps you get the information you need.

Sometimes you need to figure out the answer to a question in order to make an important decision about, say, your health, or your career, or the causes that matter to you. Studying rationality reveals that some ways of investigating those questions are much more likely to yield the truth than others. Just a few examples:

“How should I run my business?” If you’re looking to launch or manage a company, you’ll have a huge leg up over your competition if you’re able to rationally determine how well your product works, or whether it meets a need, or what marketing strategies are effective.

“What career should I go into?” Before committing yourself to a career path, you’ll probably want to learn about the experiences of people working in that field. But a rationalist also knows to ask herself, “Is my sample biased?” If you’re focused on a few famous success stories from the field, that doesn’t tell you very much about what a typical job is like, or what your odds are of making it in that field.

It’s also an unfortunate truth that not every field uses reliable methods, and so not every field produces true or useful work. If that matters to you, you’ll need the tools of rationality to evaluate the fields you’re considering working in. Fields whose methods are controversial include psychotherapy, nutrition science, economics, sociology, consulting, string theory, and alternative medicine.

“How can I help the world?” Many people invest huge amounts of money, time, and effort in causes they care about. But if you want to ensure that your investment makes a difference, you need to be able to evaluate the relevant evidence. How serious of a problem is, say, climate change, or animal welfare, or globalization? How effective is lobbying, or marching, or boycotting? How far do your contributions go at charity X versus charity Y?

Rationality shows you how to evaluate advice.

Learning about rationality, and how widespread irrationality is, sparks an important realization: You can’t assume other people have good reasons for the things they believe. And that means you need to know how to evaluate other people’s opinions, not just based on how plausible their opinions seem, but based on the reliability of the methods they used to form those opinions.

So when you get business advice, you need to ask yourself: What evidence does she have for that advice, and are her circumstances similar enough to mine for it to apply? The same is true when a friend swears by some particular remedy for acne, or migraines, or cancer. Is he repeating a recommendation made by multiple doctors? Or did he try it once and get better? What kind of evidence is reliable?

In many cases, people can’t articulate exactly how they’ve arrived at a particular belief; it’s just the product of various experiences they’ve had and things they’ve heard or read. But once you’ve studied rationality, you’ll recognize the signs of people who are more likely to have accurate beliefs: People who adjust their level of confidence to the evidence for a claim; people who actually change their minds when presented with new evidence; people who seem interested in getting the right answer rather than in defending their own egos.

Rationality saves you from bad decisions.

Knowing about the heuristics your brain uses and how they can go wrong means you can escape some very common, and often very serious, decision-making traps.

For example, people often stick with their original career path or business plan for years after the evidence has made clear that it was a mistake, because they don’t want their previous investment to be wasted. That’s thanks to the sunk cost fallacy. Relatedly, people often allow cognitive dissonance to convince them that things aren’t so bad, because the prospect of changing course is too upsetting.

And in many major life decisions, such as choosing a career, people envision one way things could play out (“I’m going to run my own lab, and live in a big city…”) – but they don’t spend much time thinking about how probable that outcome is, or what the other probable outcomes are. The narrative fallacy is that situations imagined in high detail seem more plausible, regardless of how probable they actually are.

Rationality trains you to step back from your emotions so that they don’t cloud your judgment.

Depression, anxiety, anger, envy, and other unpleasant and self-destructive emotions tend to be fueled by what cognitive therapy calls “cognitive distortions,” irrationalities in your thinking such as jumping to conclusions based on limited evidence; focusing selectively on negatives; all-or-nothing thinking; and blaming yourself, or someone else, without reason.

Rationality breaks your habit of automatically trusting your instinctive, emotional judgments, encouraging you instead to notice the beliefs underlying your emotions and ask yourself whether those beliefs are justified.

It also trains you to notice when your beliefs about the world are being colored by what you want, or don’t want, to be true. Beliefs about your own abilities, about the motives of other people, about the likely consequences of your behavior, about what happens after you die, can be emotionally fraught. But a solid training in rationality keeps you from flinching away from the truth – about your situation, or yourself — when learning the truth can help you change it.

The Straw Vulcan: Hollywood’s illogical approach to logical decisionmaking

I gave a talk at Skepticon IV last weekend about Vulcans and why they’re a terrible example of rationality. I go through five principles of Straw Vulcan Rationality(TM), give examples from Star Trek and from real life, and explain why they’re mistaken:

  1. Being rational means expecting everyone else to be rational too.
  2. Being rational means you should never make a decision until you have all the information.
  3. Being rational means never relying on intuition.
  4. Being rational means eschewing emotion.
  5. Being rational means valuing only quantifiable things — like money, productivity, or efficiency.

In retrospect, I would’ve streamlined the presentation more, but I’m happy with the content —  I think it’s an important and under-appreciated topic. The main downside was just that everyone wanted to talk to me afterwards, not about rationality, but about Star Trek. I don’t know the answer to your obscure trivia questions, Trekkies!

 

UPDATE: I’m adding my diagrams of the Straw Vulcan model of ideal decisionmaking, and my proposed revisions to it, since those slides don’t appear in the video:

The Straw Vulcan view of the relationship between rationality and emotion.

After my revisions.

How Should Rationalists Approach Death?

“How Should Rationalists Approach Death?” That’s the title of the panel I’m moderating this weekend at Skepticon, and I couldn’t be more excited. It’s a big topic – we won’t figure it all out in an hour, but I know we’ll get people to think. Do common beliefs about death make sense? How can we find comfort about our mortality? Should we try to find comfort about death? What should society be doing about death?

I managed to get 4 fantastic panelists, all of whom I respect and admire:

  • Greta Christina is an author, blogger, and speaker extraordinaire. Her writing has appeared in multiple magazines and newspapers, including Ms., Penthouse, Chicago Sun-Times, On Our Backs, and Skeptical Inquirer. I’ve been thrilled to see her becoming a well-known and respected voice in the secular community. She delivered the keynote address at the Secular Student Alliance’s 2010 Conference, and has been on speaking tours around the country.
  • James Croft is a candidate for an Ed.D. at Harvard and works with the Humanist Chaplaincy at Harvard. I had the pleasure of meeting James two years ago at an American Humanist Association conference, where we talked and argued for hours. Eloquent, gracious, and sharp, he’s a great model of intellectual engagement. He’s able to disagree agreeably, but also to change his mind when the occasion calls for it.
  • Eliezer Yudkowsky co-founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI), where he works as a full-time Research Fellow. He’s written must-read essays on Bayes’ Theorem and human rationality as well as great works of fiction. Have you heard me rave about Harry Potter and the Methods of Rationality? That’s him. His writings, especially on the community blog LessWrong, have influenced my thinking quite a bit.
  • And some lady named Julia Galef, who apparently writes a pretty cool blog with her brother, Jesse.

To give you a taste of what to expect, I chose two passages about finding hope in death – one from Greta, the other from Eliezer.

Greta:

But we can find ways to frame reality — including the reality of death — that make it easier to deal with. We can find ways to frame reality that do not ignore or deny it and that still give us comfort and solace, meaning and hope. And we can offer these ways of framing reality to people who are considering atheism but have been taught to see it as inevitably frightening, empty, and hopeless.

And I’m genuinely puzzled by atheists who are trying to undercut that.

Eliezer:

I wonder at the strength of non-transhumanist atheists, to accept so terrible a darkness without any hope of changing it. But then most atheists also succumb to comforting lies, and make excuses for death even less defensible than the outright lies of religion. They flinch away, refuse to confront the horror of a hundred and fifty thousand sentient beings annihilated every day. One point eight lives per second, fifty-five million lives per year. Convert the units, time to life, life to time. The World Trade Center killed half an hour. As of today, all cryonics organizations together have suspended one minute. This essay took twenty thousand lives to write. I wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?

If you’re coming to Skepticon – and you should, it’s free! – you need to be there for this panel.

The Penrose Triangle of Beliefs

For a long time, I didn’t think it was truly possible to believe contradictory things at the same time. In retrospect, the model I was using of belief-formation was roughly: when people decide that they believe some new claim, it’s because they’ve compared it to their pre-existing beliefs and found no contradictions. Of course, that may be how an ideal-reasoning Artificial Intelligence would build up its set of beliefs*, but we’re not ideal reasoners. It’s not at all difficult to find contradictions in most anyone’s belief set. Just for example,

  1. “The Bible is the word of God.”
  2. “The Bible says you’ll go to hell if you don’t accept Jesus.”
  3. “My atheist friends aren’t going to hell, they’re good people.”

Or – to take an example I’ve witnessed many times:

  1. “The reason it’s not okay to have sex with animals is because they can’t consent to it.”
  2. “Animals can’t consent to being killed and eaten.”
  3. “It’s fine to kill and eat animals.”

Anyway, despite the fact that I had overwhelming evidence demonstrating that, yes, people are quite capable of believing contradictory things, I was having a hard time understanding how they did it. Until I took another look at an old optical illusion called the Penrose Triangle.

The Penrose Triangle

If you look at the whole triangle at once, you can’t see it in enough detail to notice its impossibility. All you can do, at that zoomed-out level, is get a sense of whether it looks, roughly, like a plausible object. And it does.

Alternatively, you can look closely at one part of the picture at a time. Then you can actually check the details of the picture to make sure they make sense, rather than relying on the vague “feels plausible” kind of examination you did at the zoomed-out holistic level. But the catch is that in order to scrutinize the picture in detail, you have to zoom in to one subset of the picture at a time – and each corner of the triangle, on its own, is perfectly consistent.

And I think that the Penrose triangle is an apt visual metaphor for what contradictory beliefs must look like in our heads. We don’t notice the contradictions in our beliefs because we either “examine” our beliefs at the zoomed-out level (e.g., asking ourselves, “Do my beliefs about God make sense?” and concluding “Yes” because no obvious contradictions jump out at us in response to that query)… or we examine our beliefs in detail, but only a couple at a time. And so we never notice that our beliefs form an impossible object.

*(Well, to be more precise, you’d probably want an ideal-reasoning AI to assign degrees of belief, or credence, to claims, rather than a binary “believe”/”disbelieve.” So then the way to avoid contradictions would be to prohibit your AI from assigning credence in a way that violated the laws of probability. So, for example, it would never assign 99% probability to A being true and 99% probability to B being true, but only 5% probability to “A and B being true.”)
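To make that coherence requirement concrete, here’s a quick sketch (my own illustration, with made-up numbers) of the bound the laws of probability put on a conjunction:

```python
# A coherent reasoner can't assign P(A) = 0.99 and P(B) = 0.99 while assigning
# only 0.05 to "A and B": the laws of probability force
# P(A and B) >= P(A) + P(B) - 1 = 0.98.

def conjunction_bounds(p_a, p_b):
    """Return the (lower, upper) bounds any coherent P(A and B) must satisfy."""
    lower = max(0.0, p_a + p_b - 1.0)   # can't be lower than this
    upper = min(p_a, p_b)               # can't exceed either marginal
    return lower, upper

lower, upper = conjunction_bounds(0.99, 0.99)
print(f"P(A and B) must lie in [{lower:.2f}, {upper:.2f}]")        # [0.98, 0.99]
print("Is a credence of 0.05 coherent?", lower <= 0.05 <= upper)   # False
```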

Calibrating our Confidence


It’s one thing to know how confident we are in our beliefs; it’s another to know how confident we should be. Sure, the de Finetti’s Game thought experiment gives us a way to put a number on our confidence – quantifying how likely we feel we are to be right. But we still need to learn to calibrate that sense of confidence against actual results. Are we appropriately confident?

Taken at face value, if we express 90% confidence 100 times, we expect to be proven wrong an average of 10 times. But very few people take the time to see whether that’s the case. We can’t trust our memories on this, as we’re probably more likely to remember our accurate predictions and forget all the offhand predictions that fell flat. If we want to get an accurate sense of how well we’ve calibrated our confidence, we need a better way to track it.
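Here’s a rough sketch of what that tracking could look like, with made-up predictions: group your predictions by stated confidence, then compare each confidence level to the fraction that actually came true.

```python
# A minimal sketch of confidence calibration tracking (the predictions are made up).
# Each entry is (stated confidence, whether the prediction came true).
from collections import defaultdict

predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, False),
]

buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

# Well-calibrated means the hit rate in each bucket matches the stated confidence.
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%}: right {hit_rate:.0%} of the time ({len(outcomes)} predictions)")
```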

Well, here’s a way: PredictionBook.com. While working on my last post, I stumbled on this nifty project. Its homepage features the words “How Sure Are You?” and “Find out just how sure you should be, and get better at being only as sure as the facts justify.” Sounds perfect, right?

It allows you to enter your prediction, how confident you are, and when the answer will be known.  When the time comes, you record whether or not you were right and it tracks your aggregate stats.  Your predictions can be private or public – if they’re public, other people can weigh in with their own confidence levels and see how accurate you’ve been.

(This site isn’t new to rationalists: Eliezer and the LessWrong community noticed it a couple years ago, and LessWrong’er Gwern has been using it to – among other things – track Intrade predictions.)

Since I don’t know who’s using the site and how, I don’t know how seriously to take the following numbers. So take this chart with a heaping dose of salt. But I’m not surprised that the confidences entered are higher than the likelihood of being right:

Predicted Certainty    50%    60%    70%    80%    90%    100%    Total
Actual Certainty       37%    52%    58%    70%    79%     81%
Sample Size            350    544    561    558    709     219     2941

Sometimes the miscalibration matters more than others. In Mistakes Were Made (but not by me), Tavris and Aronson describe the overconfidence police interrogators feel about their ability to discern honest denials from false ones. In one study, researchers selected videos of police officers interviewing suspects who were denying a crime – some innocent and some guilty.

Kassin and Fong asked forty-four professional detectives in Florida and Ontario, Canada, to watch the tapes. These professionals averaged nearly fourteen years of experience each, and two-thirds had had special training, many in the Reid Technique. Like the students [in a similar study], they did no better than chance, yet they were convinced that their accuracy rate was close to 100 percent. Their experience and training did not improve their performance. Their experience and training simply increased their belief that it did.

As a result, more people are falsely imprisoned as prosecutors steadfastly pursue convictions for people they’re sure are guilty. This is a case in which poor calibration does real harm.

Of course, it’s often a more benign issue. Since finding PredictionBook, I see everything as a prediction to be measured. A coworker and I were just discussing plans to have a group dinner, and had the following conversation (almost word for word):

Her: “How do you feel about squash?”
Me: “I’m uncertain about squash…”
Her: “What about sauteed in butter and garlic?”
Me: “That has potential. My estimation of liking it just went up slightly.”
*Runs off to enter prediction*

I’ve already started making predictions in hopes that tracking my calibration errors will help me correct them. I wish PredictionBook had tags – it would be fascinating (and helpful!) to know that I’m particularly prone to misjudge whether I’ll like foods, or that I’m especially well-calibrated at predicting the winners of sports games.

And yes, I will be using PredictionBook on football this season. Every week I’ll try to predict the winners and losers, and see whether my confidence is well-placed. Honestly, I expect to see some homer-bias and have too much confidence in the Ravens.  Isn’t exposing irrationality fun?

De Finetti’s Game: How to Quantify Belief

What do people really mean when they say they’re “sure” of something? Everyday language is terrible at describing actual levels of confidence – it lumps together different degrees of belief into vague groups which don’t always match from person to person. When one friend tells you she’s “pretty sure” we should turn left and another says he’s “fairly certain” we should turn right, it would be useful to know how confident they each are.

Sometimes it’s enough to hear your landlord say she’s pretty sure you’ll get towed from that parking space – you’d move your car. But when you’re basing an important decision on another person’s advice, it would be better to describe confidence on an objective, numeric scale. It’s not necessarily easy to quantify a feeling, but there’s a method that can help.

Bruno de Finetti, a 20th-century Italian mathematician, came up with a creative idea called de Finetti’s Game to help connect the feeling of confidence to a percent (hat tip Keith Devlin in The Unfinished Game). It works like this:


Suppose you’re half a mile into a road trip when your friend tells you that he’s “pretty sure” he locked the door. Do you go back? When you ask him for a specific number, he replies breezily that he’s 95% sure. Use that number as a starting point and begin the thought experiment.

In the experiment, you show your friend a bag with 95 red and 5 blue marbles. You then offer him a choice: he can either pick a marble at random and, if it’s red, win $1 million. Or he can go back and verify that the door is locked and, if it is, get $1 million.

If your friend chooses to draw a marble from the bag, he prefers the 95% chance of winning, which means his real confidence that he locked the door must be somewhere below 95%. So you play another round – this time with 80 red and 20 blue marbles. If he would rather check the door this time, his confidence is higher than 80%, and perhaps you try an 87/13 split next round.

And so on. You keep offering different deals in order to home in on the level where he feels equally comfortable selecting a random marble and checking the door. That’s his real level of confidence.
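If you like thinking of the game as an algorithm, it’s essentially a binary search on the indifference point. Here’s a rough sketch; `prefers_marbles` is a stand-in for actually asking your friend each round, not a real function:

```python
# De Finetti's Game as a binary search on the indifference point (a sketch).
# prefers_marbles(p) stands in for asking the friend: "Would you rather draw from
# a bag where a fraction p of the marbles are red, or bet on your own belief?"

def definetti_game(prefers_marbles, rounds=7):
    low, high = 0.0, 1.0
    for _ in range(rounds):
        p = (low + high) / 2
        if prefers_marbles(p):
            high = p   # the marble bet looks better, so his confidence is below p
        else:
            low = p    # he'd rather bet on his belief, so his confidence is at least p
    return (low + high) / 2

# Example: a friend whose true (unstated) confidence is 83%.
print(round(definetti_game(lambda p: p > 0.83), 2))   # ~0.83
```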


The thought experiment should guide people through the tricky process of connecting their feeling of confidence to a corresponding percent. The answer will still be somewhat fuzzy – after all, we’re still relying on a feeling that one option is better than another.

It’s important to remember that the game doesn’t tell us how likely we are to BE right. It only tells us about our confidence – which can be misplaced. From cognitive dissonance to confirmation bias, there are countless psychological influences messing up the calibration between our confidence level and our chance of being right. But the more we pay attention to the impact of those biases, the more we can do to compensate. It’s a good practice (though a pretty rare one) to stop and think, “Have I really been as accurate as I would expect, given how confident I feel?”

I love the idea of measuring people’s confidence (and not just because I can rephrase it as measuring their doubt). I just love being able to quantify things! We can quantify exactly how much a new piece of evidence is likely to affect jurors, how much a person’s suit affects their persuasive impact, or how much confidence affects our openness to new ideas.

We could even use de Finetti’s Game to watch the inner workings of our minds doing Bayesian updating. Maybe I’ll try it out on myself to see how confident I feel that the Ravens will win the Super Bowl this year, before and after the Week 1 game against the rival Pittsburgh Steelers. I expect that my feeling of confidence won’t shift quite in accordance with what the Bayesian analysis tells me a fully rational person would believe. It’ll be fun to see just how irrational I am!

Loaded words

It’s striking how much more unreasonable you can make someone sound, simply by quoting them in a certain tone of voice. I’ve noticed this when I’m listening to someone describe a fight or other incident in which he felt that someone was being rude to him — he’ll relate a comment that the person made (e.g., “And then she was like, ‘Sure, whatever…’”). And when he quotes the person, he uses a sarcastic or cutting tone of voice, so of course I think, “Wow, that person was being obnoxious!”

But then I wonder: Can I really be confident that he’s accurately representing the tone of her comment? It’s pretty easy for someone to, intentionally or not, distort the tone of a comment they’re recounting, while still accurately quoting the person’s official words. Especially if they’re already annoyed at that person. So to be fair, I try to replay the comment in my head in a more neutral tone to see if that makes it seem less obnoxious. (“Sure, whatever” could be said in a detached, I-don’t-have-a-strong-opinion-about-this way, just as easily as it could be said in a sarcastic or dismissive way.)

And there’s an analogy to this phenomenon in print. Journalists and bloggers can cast a totally different light on someone’s quote just through the word choice they use when they refer to the quote. Compare, for example, “Hillary objected that…” to “Hillary complained that…” The former makes her sound controlled and rational; the latter makes her sound whiny. The writer can exert an impressive amount of influence over your reaction to the quote, without ever being accused of misrepresenting what Hillary said.

One insidious example of a loaded word that I’ve found to be quite common is “admitted.” It’s loaded because it implies that whatever content follows reflects poorly on the speaker, and that he would have preferred to conceal it if he could. Take these recent examples:

“In his blog, Volokh admitted that his argument ‘was a joke’” (Wikipedia)
“Bill O’Reilly Admits That His TV Persona Is Just An Act” (Inquisitr)
“Paul Krugman admitted that the zero bound was not actually a bound” (Free Exchange)
“Kennedy admitted that ‘the end result’ under his standard ‘may be the same as that suggested by the dissent…’” (Basman Rose Law)

I investigated the original quotes from the people whom these sentences allude to, and as far as I can tell, they gave no sense that they thought what they were saying was shameful or unflattering. The word “admitted” could just as easily have been replaced by something else. For example:

“In his blog, Volokh clarified that his argument ‘was a joke.’”
“Bill O’Reilly Says That His TV Persona Is Just An Act”
“Paul Krugman explained that the zero bound was not actually a bound”
“Kennedy acknowledged that ‘the end result’ under his standard ‘may be the same as that suggested by the dissent…’”

Suddenly all the speakers sound stronger and more confident, right? This isn’t to say that the word “admitted” is never appropriate. Sometimes it clearly fits, such as when someone is agreeing that some particular point does, in fact, weaken the argument she is trying to advance. But in most cases, reading that someone “admitted” some point can subtly make you think more poorly of him without reason. That’s why I advise being on the lookout for this and other loaded words, and when you notice them, try mentally replacing them with something more neutral so that you can focus on the point itself without your judgment being inadvertently skewed.

Bayesian truth serum

Here’s a sneaky trick for extracting the truth from someone even when she’s trying to conceal it from you: Rather than asking her how she thinks or behaves, ask her how she thinks other people think or behave.

MIT professor of psychology and cognitive science Drazen Prelec calls this trick “Bayesian truth serum,” according to Tyler Cowen in Discover Your Inner Economist. The logic behind it is simple: our impressions of “typical” attitudes and behavior are colored by our own attitudes and behavior. And that’s reasonable. You should count yourself as one data point in your sample of “how people think and behave.”

Your own data is likely to influence your sample more strongly than other data points, however, for two reasons. First, it’s much more salient to you than your data about other people, so you’re likely to overweight it in your estimation. And second, there’s a ripple effect: people tend to cluster with others who think and act similarly to themselves, so however your sample differs from the general population, that’s an indicator of how you yourself differ from it.

So, to use Cowen’s example, if you ask a man how many sexual partners he’s had, he might have a strong incentive to lie, either downplaying or exaggerating his history depending on who you are and what he wants you to think of him. But his estimate of a “typical” number will still be influenced by his own, and by that of his friends and acquaintances (who, because of the selection effect, are probably more similar to him than the general population is). “When we talk about other people,” Cowen writes, “we are often talking about ourselves, whether we know it or not.”

Asking for reassurance: a Bayesian interpretation

Bayesianism gives us a prescription for how we should update our beliefs about the world as we encounter new evidence. Roughly speaking, when you encounter new evidence (E), you should increase your confidence in a hypothesis H only if that evidence would’ve been more likely to occur in a world where H was true than in a world in which H was false — that is, if P(E|H) > P(E|not-H).
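In symbols, the update looks like this. A minimal sketch with placeholder numbers – the point is just that the posterior rises exactly when P(E|H) > P(E|not-H):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|not-H) * P(not-H)]
# The numbers below are placeholders, chosen only to illustrate the direction of the update.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) given a prior on H and the likelihood of E under H and not-H."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

print(round(update(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.4), 2))  # 0.67: confidence rises
print(round(update(prior=0.5, p_e_given_h=0.4, p_e_given_not_h=0.8), 2))  # 0.33: confidence falls
```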

I think this is indisputably correct. What I’ve been less sure about is whether Bayesianism tends to lead to conclusions that we wouldn’t have arrived at anyway just through common sense. I mean, isn’t this how we react to evidence intuitively? Does knowing about Bayes’ rule actually improve our reasoning in everyday life?

As of yesterday, I can say: yes, it does.

I was complaining to a friend about people who ask questions like, “Do you think I’m pretty?” or “Do you really like me?” My argument was that I understood the impulse to seek reassurance if you’re feeling insecure, but I didn’t think it was useful to actually ask such a question, since the person’s just going to tell you “yes” no matter what, and you’re not going to get any new information from it. (And you’re going to make yourself look bad by asking.)

My friend made the valid point that even if everyone always responds “Yes,” some people are better at lying than others, so if the person’s reply sounds unconvincing, that’s a telltale sign that they don’t genuinely like you / think you’re pretty. “Okay, that’s true,” I replied. “But if they reply ‘yes’ and it sounds convincing, then you haven’t learned any new information, because you have no way of knowing whether he’s telling the truth or whether he’s just a good liar.”

But then I thought about Bayes’ rule and realized I was wrong — even a convincing-sounding “yes” gives you some new information. In this case, H = “He thinks I’m pretty” and E = “He gave a convincing-sounding ‘yes’ to my question.” And I think it’s safe to assume that it’s easier to sound convincing if you believe what you’re saying than if you don’t, which means that P(E | H) > P(E | not-H). So a proper Bayesian reasoner encountering E should increase her credence in H.
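To put made-up numbers on it: suppose a convincing-sounding “yes” comes 90% of the time when he really does think you’re pretty, only 60% of the time when he doesn’t, and that you started out at 50%. Then:

```python
# Made-up numbers, just to show that a convincing "yes" should nudge your credence up.
p_h = 0.5              # prior: P(he thinks I'm pretty)
p_e_given_h = 0.9      # P(convincing "yes" | he does)
p_e_given_not_h = 0.6  # P(convincing "yes" | he doesn't)

posterior = (p_e_given_h * p_h) / (p_e_given_h * p_h + p_e_given_not_h * (1 - p_h))
print(round(posterior, 2))   # 0.6 -- higher than the 0.5 you started with
```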

(Of course, there’s always the risk, as with Heisenberg’s Uncertainty Principle, that the process of measuring something will actually change it. So if you ask “Do you like me?” enough, the true answer might shift from “yes” to “no”…)

Does it make sense to play the lottery?

A very smart friend of mine told me yesterday that he buys a lottery ticket every week. I’ve encountered other smart people who do the same, and it always confused me. If you’re aware (as these people all are) that the odds are stacked against you, then why do you play?

I raised this question on Facebook recently and got some insightful replies. One economist pointed out that money is different from utility (i.e., happiness, or well-being), and that it’s perfectly legitimate to get disproportionately more utility out of a jackpot win than you lose from buying a ticket.

So for example, let’s say a ticket costs $1 and gives you a 1 in 10 million chance of winning $8 million. Then the expected monetary value of the ticket equals $8 million/ 10 million – $1 = negative $0.20. But what if you get 15 million units of utility (utils) from $8 million, and you only sacrifice one util from losing $1? In that case, the expected utility value of the ticket equals 15 million utils / 10 million – 1 util = 0.5 utils. Which means buying the ticket is a smart move, because it’ll increase your utility.
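Spelled out (using the hypothetical figures above – the 15-million-util number is just the example’s stipulation, not a claim about anyone’s real utility curve):

```python
# The hypothetical lottery from the example above.
ticket_cost = 1                   # dollars
jackpot = 8_000_000               # dollars
p_win = 1 / 10_000_000

expected_dollars = p_win * jackpot - ticket_cost
print(round(expected_dollars, 2))   # -0.2: you lose 20 cents on average

utils_from_jackpot = 15_000_000     # stipulated utility of winning $8 million
utils_from_dollar = 1               # stipulated utility lost by spending $1

expected_utils = p_win * utils_from_jackpot - utils_from_dollar
print(round(expected_utils, 2))     # 0.5: positive expected utility
```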

I’m sympathetic to that argument in theory — it’s true that we shouldn’t assume that there’s a one-to-one relationship between utility and money, and that someone could hypothetically have a utility curve that makes it a good deal for them to play the lottery. But in practice, the relationship between money and utility tends to be disproportionate in the opposite direction from the example above, in that the more dollars you have, the less utility you get out of each additional dollar. So the $8 million you would gain if you won the lottery will probably give you less than 8 million times as much utility as the $1 that you’re considering spending on the ticket. Which would make the ticket a bad purchase even in terms of expected utility, not just in terms of expected money.

Setting that theoretical argument aside, the most common actual response I get from smart people who play the lottery is that they’re buying a fantasy aid. Purchasing the ticket allows them to daydream about what it would be like to win $8 million, and the experience of daydreaming itself gives them enough utility to make up for the expected monetary loss. “Why can’t you just daydream without paying the $1?” I always ask. “Because it’s not as satisfying if I know that I have no chance of winning,” they reply.  Essentially, they don’t care how small their chance of winning is, they just need to know that they have some non-zero chance at the jackpot in order to be able to daydream about it.

I used to accept that argument. But in talking with my friend yesterday, it occurred to me that it’s not true that your chances of winning a fortune are zero without a lottery ticket. For example, you could suddenly discover that you have a long-lost wealthy aunt who died and bequeathed her mansion to you. Or you could find a diamond ring in the gutter. Or, for that matter, you could find a winning lottery ticket in the gutter. The probability of any of these events happening isn’t very high, of course, but it is non-zero.

So you simply can’t say that the lottery ticket is worth the money because it increases your chances of becoming rich from zero to non-zero. All it’s really doing is increasing your chances of becoming rich from extremely tiny to very tiny. And if all you need to enable your Scrooge McDuck daydreams is the knowledge that they have a non-zero chance of coming true, then you can keep those daydreams and your ticket money too.