How Should Rationalists Approach Death?

“How Should Rationalists Approach Death?” That’s the title of the panel I’m moderating this weekend at Skepticon, and I couldn’t be more excited. It’s a big topic – we won’t figure it all out in an hour, but I know we’ll get people to think. Do common beliefs about death make sense? How can we find comfort about our mortality? Should we try to find comfort about death? What should society be doing about death?

I managed to get 4 fantastic panelists, all of whom I respect and admire:

  • Greta Christina is author, blogger, speaker extraordinaire. Her writing has appeared in multiple magazines and newspapers, including Ms., Penthouse, Chicago Sun-Times, On Our Backs, and Skeptical Inquirer. I’ve been thrilled to see her becoming a well-known and respected voice in the secular community. She delivered the keynote address at the Secular Student Alliance’s 2010 Conference, and has been on speaking tours around the country.
  • James Croft is an Ed.D. candidate at Harvard and works with the Humanist Chaplaincy at Harvard. I had the pleasure of meeting James two years ago at an American Humanist Association conference, where we talked and argued for hours. Eloquent, gracious, and sharp, he’s a great model of intellectual engagement. He’s able to disagree agreeably, but also change his mind when the occasion calls for it.
  • Eliezer Yudkowsky co-founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI), where he works as a full-time Research Fellow. He’s written must-read essays on Bayes’ Theorem and human rationality as well as great works of fiction. Have you heard me rave about Harry Potter and the Methods of Rationality? That’s him. His writings, especially on the community blog LessWrong, have influenced my thinking quite a bit.
  • And some lady named Julia Galef, who apparently writes a pretty cool blog with her brother, Jesse.

To give you a taste of what to expect, I chose two passages about finding hope in death – one from Greta, the other from Eliezer.

Greta:

But we can find ways to frame reality — including the reality of death — that make it easier to deal with. We can find ways to frame reality that do not ignore or deny it and that still give us comfort and solace, meaning and hope. And we can offer these ways of framing reality to people who are considering atheism but have been taught to see it as inevitably frightening, empty, and hopeless.

And I’m genuinely puzzled by atheists who are trying to undercut that.

Eliezer:

I wonder at the strength of non-transhumanist atheists, to accept so terrible a darkness without any hope of changing it. But then most atheists also succumb to comforting lies, and make excuses for death even less defensible than the outright lies of religion. They flinch away, refuse to confront the horror of a hundred and fifty thousand sentient beings annihilated every day. One point eight lives per second, fifty-five million lives per year. Convert the units, time to life, life to time. The World Trade Center killed half an hour. As of today, all cryonics organizations together have suspended one minute. This essay took twenty thousand lives to write. I wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?

If you’re coming to Skepticon – and you should, it’s free! – you need to be there for this panel.

What do philosophers think about intuition?

Earlier this year I complained, on Rationally Speaking, about the fact that so many philosophers think it’s sufficient to back up their arguments by citing “intuition.” It’s a tricky term to pin down, but generally philosophers cite intuition when they think something is “clearly true” but can’t demonstrate it with logic or evidence. So, for example, philosophers of ethics will often claim that things are “good” or “bad” by citing their intuition. And philosophers of mind will cite their intuitions to argue that certain things would or wouldn’t be conscious (for example, David Chalmers relies on intuition to argue for the theoretical possibility of “philosophical zombies,” creatures that would act and respond exactly like conscious human beings, but which wouldn’t be conscious).

I cited many examples, not only of philosophers using intuitions as evidence, but of philosophers acknowledging that appeals to intuition are ubiquitous in the field. (“Intuitions often play the role that observation does in science – they are data that must be explained, confirmers or the falsifiers of theories,” wrote one philosopher.) That’s worrisome, to me, because the whole point of philosophy is allegedly to figure out whether our intuitive judgments make sense. It’s also worrisome to me because intuitions vary sharply from person to person; for example, I don’t agree at all with G. E. Moore’s argument that it is intuitively obvious that it’s “better” to have a planet full of sunsets and waterfalls than one with filth, even if no one ever gets to see that planet. (He may prefer a universe that contains Planet Waterfall to one that contains Planet Filthy, but I don’t think that makes the former objectively “better.”)

In the comment thread under his response-post, Massimo objected that intuitions are not, in fact, widespread in philosophy. “Julia, a list of cherry picked citations an argument doesn’t make,” he wrote, and he asked me if I had randomly polled philosophers. I hadn’t, of course.

But I recently came across two people who did. Kuntz & Kuntz’s “Surveying Philosophers About Philosophical Intuition,” from the March issue of the Review of Philosophy and Psychology, surveyed 282 academic philosophers and found that 51% of them thought that intuitions are “useful to justification in philosophical methods.”

Because the term “intuition” is so nebulous, the researchers also presented their survey respondents with a list of some of the more common ways of defining intuition, and asked them to rank how apt they thought the definitions were. The top two “most apt” definitions of intuition were the following:

  1. “Judgment that is not made on the basis of some kind of observable and explicit reasoning process”
  2. “An intellectual happening whereby it seems that something is the case without arising from reasoning, or sensorial perceiving, or remembering.”

The survey also shed light on one reason why Massimo, a philosopher of science, might have underestimated the prevalence of appeals to intuition in philosophy as a whole: “In regard to the usefulness of intuitions to justification, our results also revealed that philosophers of science expressed significantly lower agreement than philosophers doing metaphysics, epistemology, ethics, and philosophy of mind,” Kuntz and Kuntz wrote. That squares with my experience, too — most of the philosophy of science I’ve read has been grounded in logic, math, and evidence.

Another important side point the researchers make is that there’s more than one way to use your intuitions. Philosophers certainly do use them as justification for claims, but they also use intuitions to generate claims, which they then justify using more rigorous methods like logic and evidence. 83% of survey respondents agreed that intuitions are useful in that latter way, and I agree too — I have no problem with people using intuition to generate possible ideas; I just have a problem with people saying “This feels intuitively true to me, so it must be true.”

A Sleeping Beauty paradox

Imagine that one Sunday afternoon, Sleeping Beauty is taking part in a mysterious science experiment. The experimenter tells her:

“I’m going to put you to sleep tonight, and wake you up on Monday. Then, out of your sight, I’m going to flip a fair coin. If it lands Heads, I will send you home. If it lands Tails, I’ll put you back to sleep and wake you up again on Tuesday, and then send you home. But I will also, if the coin lands Tails, administer a drug to you while you’re sleeping that will erase your memory of waking up on Monday.”

So when she wakes up, she doesn’t know what day it is, but she does know that the possibilities are:

  • It’s Monday, and the coin will land either Heads or Tails.
  • It’s Tuesday, and the coin landed Tails.

We can rewrite the possibilities as:

  • Heads, Monday
  • Tails, Monday
  • Tails, Tuesday

I’d argue that since it’s a fair coin, you should place 1/2 probability on the coin being Heads and 1/2 on the coin being Tails. So the probability on (Heads, Monday) should be 1/2. I’d also argue that since Tails means she wakes up once on Monday and once on Tuesday, and since those two wakings are indistinguishable from each other, you should split the remaining 1/2 probability evenly between (Tails, Monday) and (Tails, Tuesday). So you end up with:

  • Heads, Monday  (P = 1/2)
  • Tails, Monday (P = 1/4)
  • Tails, Tuesday  (P = 1/4)

So, is that the answer? It seems indisputable, right? Not so fast. There’s something troubling about this result. To see what it is, imagine that Beauty is told, upon waking, that it’s Monday. Given that information, what probability should she assign to the coin landing Heads? Well, if you look at the probabilities we’ve assigned to the three scenarios, you’ll see that conditional on it being Monday, Heads is twice as likely as Tails. And why is that so troubling? Because the coin hasn’t been flipped yet. How can Beauty claim that a fair coin is twice as likely to come up Heads as Tails?
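One way to probe the puzzle (though it doesn’t settle the philosophical dispute) is to simulate many runs of the experiment and count awakenings. Note that treating every awakening as one equally weighted observation is itself a modeling choice, and arguably the crux of the disagreement — this is just a sketch under that assumption:

```python
import random

def simulate(trials=100_000, seed=0):
    """Run the experiment many times, recording every awakening.

    NOTE: treating each awakening as one equally weighted observation
    is itself a modeling choice -- arguably the crux of the puzzle.
    """
    rng = random.Random(seed)
    awakenings = []  # one (coin, day) entry per time Beauty wakes up
    for _ in range(trials):
        coin = rng.choice(["Heads", "Tails"])
        awakenings.append((coin, "Monday"))
        if coin == "Tails":  # Tails means a second, memory-wiped awakening
            awakenings.append((coin, "Tuesday"))
    return awakenings

awakenings = simulate()
total = len(awakenings)
heads = sum(1 for coin, _ in awakenings if coin == "Heads")
mondays = [coin for coin, day in awakenings if day == "Monday"]
heads_given_monday = mondays.count("Heads") / len(mondays)

print(f"P(Heads | awake)  ~ {heads / total:.3f}")       # about 0.333
print(f"P(Heads | Monday) ~ {heads_given_monday:.3f}")  # about 0.500
```

Under this long-run-frequency reading, the fraction of Monday awakenings that are Heads comes out to 1/2 rather than the 2/3 that the table above implies — a hint at where the tension lies, whichever side of it you come down on.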

Can you figure out what’s wrong with the reasoning in this post?

RS #47: The Search for Extra-Terrestrial Intelligence

In the latest episode of Rationally Speaking, Massimo and I spar about SETI, the Search for Extra-Terrestrial Intelligence: Is it a “scientific” endeavor? Is it worth maintaining? How would we find intelligent alien life, if it’s out there?

My favorite parts of this episode are the ones in which we’re debating how likely it is that intelligent alien life exists. Massimo’s opinion is essentially that we have no way to answer the question; I’m less pessimistic. There are a number of scientific facts which I think should raise or lower our estimates of the prevalence of intelligent alien life. And what about the fact of our own existence? Does that provide any evidence we can use to reason about the likelihood of our ever encountering other intelligent life? It’s a very tricky question, fraught as it is with unresolved philosophical problems in probability theory, but a fascinating one.


The Penrose Triangle of Beliefs

For a long time, I didn’t think it was truly possible to believe contradictory things at the same time. In retrospect, the model I was using of belief-formation was roughly: when people decide that they believe some new claim, it’s because they’ve compared it to their pre-existing beliefs and found no contradictions. Of course, that may be how an ideal-reasoning Artificial Intelligence would build up its set of beliefs*, but we’re not ideal reasoners. It’s not at all difficult to find contradictions in most anyone’s belief set. Just for example,

  1. “The Bible is the word of God.”
  2. “The Bible says you’ll go to hell if you don’t accept Jesus.”
  3. “My atheist friends aren’t going to hell, they’re good people.”

Or – to take an example I’ve witnessed many times:

  1. “The reason it’s not okay to have sex with animals is because they can’t consent to it.”
  2. “Animals can’t consent to being killed and eaten.”
  3. “It’s fine to kill and eat animals.”

Anyway, despite the fact that I had overwhelming evidence demonstrating that, yes, people are quite capable of believing contradictory things, I was having a hard time understanding how they did it. Until I took another look at an old optical illusion called the Penrose Triangle.

The Penrose Triangle

If you look at the whole triangle at once, you can’t see it in enough detail to notice its impossibility. All you can do, at that zoomed-out level, is get a sense of whether it looks, roughly, like a plausible object. And it does.

Alternatively, you can look closely at one part of the picture at a time. Then you can actually check the details of the picture to make sure they make sense, rather than relying on the vague “feels plausible” kind of examination you did at the zoomed-out holistic level. But the catch is that in order to scrutinize the picture in detail, you have to zoom in to one subset of the picture at a time – and each corner of the triangle, on its own, is perfectly consistent.

And I think that the Penrose triangle is an apt visual metaphor for what contradictory beliefs must look like in our heads. We don’t notice the contradictions in our beliefs because we either “examine” our beliefs at the zoomed-out level (e.g., asking ourselves, “Do my beliefs about God make sense?” and concluding “Yes” because no obvious contradictions jump out at us in response to that query)… or we examine our beliefs in detail, but only a couple at a time. And so we never notice that our beliefs form an impossible object.

*(Well, to be more precise, you’d probably want an ideal-reasoning AI to assign degrees of belief, or credence, to claims, rather than a binary “believe”/”disbelieve.” So then the way to avoid contradictions would be to prohibit your AI from assigning credence in a way that violated the laws of probability. So, for example, it would never assign 99% probability to A being true and 99% probability to B being true, but only 5% probability to “A and B being true.”)
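The footnote’s no-contradiction constraint can be written as a small coherence check. This is a sketch of the standard bounds on a conjunction (the function names are mine, not anything from the post):

```python
def conjunction_lower_bound(p_a, p_b):
    """P(A and B) = P(A) + P(B) - P(A or B), and P(A or B) <= 1,
    so P(A and B) >= P(A) + P(B) - 1 (and never below 0)."""
    return max(0.0, p_a + p_b - 1.0)

def is_coherent(p_a, p_b, p_a_and_b):
    """A credence assignment obeys the laws of probability only if
    max(0, P(A)+P(B)-1) <= P(A and B) <= min(P(A), P(B))."""
    return conjunction_lower_bound(p_a, p_b) <= p_a_and_b <= min(p_a, p_b)

# The footnote's example: 99% on A, 99% on B, but only 5% on "A and B"
print(is_coherent(0.99, 0.99, 0.05))  # False -- the conjunction must be at least ~0.98
print(is_coherent(0.99, 0.99, 0.99))  # True
```

An ideal-reasoning AI would run a check like this (over all its claims, not just pairs) before accepting a new credence assignment.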

Easy Math Puzzle – Or is it?

How good are you at basic math? Can you solve this simple logic puzzle? Here, give it a go and let me know how long it took you to answer:

Got it yet?

It looks easier than it is. The options are presented beautifully to cause maximum mental confusion.

As my dad put it, the answer depends on the answer. If the answer is 60%, it’s 25%. If the answer is 25%, it’s 50%. If the answer is 50%, it’s 25%. There’s an endless loop with no correct answer.

Don’t lose sleep. I “found” an answer; it was hidden: [edited for clarity]

Yes, I photoshopped this. I’m either cheating or engaging in outside-the-box thinking. Sometimes it’s tough to tell the difference.

My preferred set of answers would be:

  • A) 25%
  • B) 50%
  • C) 75%
  • D) 50%

Though I’m tempted to throw a “0%” in for good measure…
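Since the puzzle image itself is missing here, the option list has to be reconstructed from the loop above: 25% appears twice, while 50% and 60% each appear once (that reconstruction is my assumption). A quick sketch can then check any option set for self-consistent answers:

```python
def consistent_answers(options):
    """Return the option values v (in percent) for which the chance of
    picking v uniformly at random equals v itself -- i.e., the
    self-consistent answers to the self-referential question."""
    n = len(options)
    return sorted({v for v in options if options.count(v) / n * 100 == v})

# Option set reconstructed from the endless loop described above (an assumption):
print(consistent_answers([25, 50, 60, 25]))  # [] -- no fixed point, hence the loop
# The preferred replacement set, A) 25%, B) 50%, C) 75%, D) 50%:
print(consistent_answers([25, 50, 75, 50]))  # [25, 50] -- two self-consistent answers
```

With the replacement set, both 25% and 50% are self-consistent, which trades the paradox for mere ambiguity. And adding a 0% option recreates the trouble: 0% is correct only if it isn’t.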

(Puzzle via PostSecret by way of Spencer of Ask a Mathematician/Ask a Physicist)


Why this Meme Exploded

[cross-posted on Friendly Atheist]

Somehow, it went viral. In just 24 hours, the Facebook page of the Secular Student Alliance (my organization) exploded from 6,500 supporters’ “likes” to 18,000. I found myself thinking, “How the hell did that happen?” And then thinking, “Hmm… how can we do it again?”

The whole thing started with Kenny Flagg, one of our group leaders with the Freethinkers of UND. After noticing that the SSA’s Facebook presence was much smaller than Campus Crusade for Christ’s, he wanted to make a difference. He “grabbed both profile pictures for the groups, added the stats from each page, and threw in a quick meme for good measure.” Then he posted it on Reddit. That was it. Take a look – would you expect it to inspire a frenzy of activity?


Yes, this is the image that launched a thousand clicks. Well, several thousand, actually.

I had to figure out why a simple picture like this inspired such a big reaction. The more I thought about it, the more psychological and rhetorical techniques I saw at work. Kenny:

  1. Demonstrated insider status
  2. Invoked tribal/patriotic feelings, and
  3. Gave people direction.

Well, look at that. In classic style, he hit the three classical appeals of rhetoric: Ethos, Pathos, and Logos.

Kenny’s Insider Status (Ethos)

Kenny was a perfect person for the task. If my coworkers or I had been the ones to post, we would seem self-serving. Kenny, not being an SSA employee, comes across as a more objective voice. Do you trust the used car salesman or the blue book to tell you a car’s value? We tend to trust people more if they share our interest – and we trust them less if we suspect they’re looking out for themselves.

A great way to gain people’s trust is by proving that you’re a member of their community. Sharing group identity acts as a proxy for sharing values. The “Challenge accepted” meme accomplished that beautifully. It’s like using slang – it reinforces your status as an insider. Redditors heard the message: “I’m one of you.” He put that to good use.

Our Tribal Emotions (Pathos)

After establishing his credibility as an insider, Kenny appealed to an incredibly powerful emotion to get them to act: group loyalty. When groups of people get compared to their rivals, it creates an us-versus-them mentality. The competition angle rallied atheists on Reddit into a stronger, more unified group.

And the more atheist redditors rallied together, the stronger the social proof dynamic became. When we’re in a group, we tend to watch other people for cues about how to behave. As redditors saw other people commenting, upvoting the post, and liking the SSA’s page, it influenced their behavior. People got the impression: “This is what atheists on Reddit are doing.” As part of that group, they felt moved to behave the same way.

Kenny’s post inspired group pride, anger at cultural opponents, and the desire to fit in – emotions that motivate us to act. But that motivation needed direction.

Giving a Direction (Logos)

Have you ever felt that you wanted to make a difference, but just didn’t know how to do it? Without direction, all that energy just sputters out. Telling people to “eat healthier” is overwhelming and vague, but saying “switch to 1% milk” is specific and helpful.

Kenny gave everyone a simple, concrete task: go click “like” on the Secular Student Alliance’s page. He had everyone share his big vision: to get the Secular Student Alliance as many “likes” as the Campus Crusade for Christ page. He even provided a link to the SSA’s Facebook page. The direction was clear.

It all fit together.

Can we do this again?

We never know for sure whether a meme will explode.

But we’ll be more likely to go viral if we pay attention to what works. If you’re interested, I recommend Chip and Dan Heath’s books Made to Stick and Switch. Kenny managed to use psychology techniques without meaning to, but we can be more deliberate with our efforts. (Be careful fostering us-versus-them feelings. Competition is all well and good, but actual hostility is dangerous.)

It might seem like a lot of it boils down to luck. But as Richard Wiseman found, capitalizing on “luck” is really a skill. The Secular Student Alliance prepared by cultivating student leaders who were enthusiastic to help us out. When we spotted the opportunity we posted like madmen, and even hosted an “Ask Us Anything” to interact with the community. And yes, Kenny did a fantastic job.

For such a quick image, it had a lot going for it. It’s not exactly Cicero orating in the Roman Senate, but it was damn good rhetoric in its own way. Forget a thousand words, that picture was worth 12,000 Facebook fans.

What is “objectification,” and what’s wrong with it?

I was pleased to discover that one of my favorite bloggers, Luke Muehlhauser, had recently tackled a topic that’s been on my mind too: what do people mean when they talk about men “objectifying” women, and why exactly is it a bad thing? As per usual with Luke’s posts, it’s a clear-headed and thoughtful analysis, and it’s obvious that he isn’t trying to attack anyone — just genuinely trying to parse the concept and determine the degree to which it makes sense.

Luke lists several typical ways people define “objectification,” most of which center around the idea of treating another person as a means to an end, without being conscious of their feelings and goals and preferences. I’ve always felt this is an odd definition for two reasons, both of which Luke raises: First, it seems like an incomplete definition, in that there are many cases that match that definition perfectly but which no one would call instances of objectification (Luke has a clever photographic example).

And second, if objectification is “using someone as a means to an end,” it isn’t clear why objectification is inherently bad, even though the word typically carries a strong connotation of condemnation. After all, we all use each other as means to an end all the time! When I buy a cup of coffee, I’m treating the barista as a means to the end of getting a cup of coffee. I’m not really thinking about his feelings or goals — and I don’t think he expects or particularly wants me to be.

Of course, if not-thinking about someone’s feelings means that you harm him (like if I were rude to the barista) then it’s easy to see why that’s bad. But the proper conclusion from that fact is “harming people is bad,” not “objectification is bad.” It’s certainly possible to use someone as a means to an end without harming him, and so it’s still not clear why objectification per se is bad.

At least, that’s the form my argument typically took until yesterday. I thought about it a bit more after reading Luke’s analysis, and concluded that I had been missing part of the picture. So to the extent that I’m now sympathetic to arguments against objectification, it’s for this reason:

Objectification’s not necessarily a problem at the individual level. When Person A uses Person B as a means to an end, as long as B’s not being harmed, then it’s ethically unproblematic (at least for us utilitarian-minded folks). The tricky thing is that when you have a lot of A’s systematically treating a lot of B’s as a means to an end in the same kind of way, it can start to become a problem. Because at that scale, it can affect the way A’s and B’s think about each other — people’s attitudes are influenced by the way the people around them think and act. So it can have this self-reinforcing ripple effect that ends up stifling other kinds of interactions and relationships that many A’s and B’s would’ve found fulfilling.

So, that’s my current theory. It’s the best I can do at reconciling the facts that (1) I’m not at all bothered by the idea of a particular man being interested in a particular woman only for sex, and (2) I hate the idea of a society in which most men are only interested in women for sex (and I think such a society would be seriously sub-optimal for both men and women).*

I think this is a very under-appreciated aspect of the objectification debate. I also think it poses interesting problems for utilitarian ethics; how do you assign blame in situations where any single person doing X is harmless, but many people doing X is harmful? It’s somewhat akin to problems like pollution, where each individual actor can truthfully argue, “Given that everyone else is polluting, it’s not going to make any difference if I do it too.”
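The pollution analogy can be put in miniature with a toy threshold model (the numbers here are purely illustrative, not anything from the post):

```python
def total_harm(n_actors, threshold=100, harm=1000.0):
    """Toy model: no harm at all until enough people act, then a lot.
    (Illustrative numbers only -- a sketch, not a real harm function.)"""
    return harm if n_actors >= threshold else 0.0

n_others = 5000  # everyone else is already acting, well past the threshold
marginal_harm = total_harm(n_others + 1) - total_harm(n_others)

print(marginal_harm)         # 0.0 -- "my" participation changes nothing
print(total_harm(n_others))  # 1000.0 -- yet the collective harm is real
```

Each actor can truthfully say their marginal contribution is zero, yet the harm exists only because of what the actors do collectively — which is exactly the blame-assignment problem for utilitarian ethics that makes these cases so tricky.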

And with objectification, not only do you have the fact that no single person’s actions are going to measurably change the overall culture, you also have the fact that the overall culture is partly to blame for each individual’s actions. And all of the individuals’ actions, in turn, are to blame for the overall culture. The circularity makes it especially tricky to figure out the degree to which any individual actor deserves blame for his actions.

*Of course, men objectifying women isn’t the only kind of objectification; you could fill in any gender in place of either “men” or “women.” I just used that pairing because it’s the typical one in these discussions, but my argument isn’t actually gender-dependent.

Overcoming The Curse of Knowledge

[crossposted at LessWrong]

What is the Curse of Knowledge, and how does it apply to science education, persuasion, and communication? No, it’s not a reference to the Garden of Eden story. I’m referring to a particular psychological phenomenon that can make our messages backfire if we’re not careful.

Communication isn’t a solo activity; it involves both you and the audience. Writing a diary entry is a great way to sort out thoughts, but if you want to be informative and persuasive to others, you need to figure out what they’ll understand and be persuaded by. A common habit is to use ourselves as a mental model – assuming that everyone else will laugh at what we find funny, agree with what we find convincing, and interpret words the way we use them. The model works to an extent – especially with people similar to us – but other times our efforts fall flat. You can present the best argument you’ve ever heard, only to have it fall on dumb – sorry, deaf – ears.

That’s not necessarily your fault – maybe they’re just dense! Maybe the argument is brilliant! But if we want to communicate successfully, pointing fingers and assigning blame is irrelevant. What matters is getting our point across, and we can’t do it if we’re stuck in our head, unable to see things from our audience’s perspective. We need to figure out what words will work.

Unfortunately, that’s where the Curse of Knowledge comes in. In 1990, Elizabeth Newton did a fascinating psychology experiment: She paired participants into teams of two: one tapper and one listener. The tappers picked one of 25 well-known songs and would tap out the rhythm on a table. Their partner – the designated listener – was asked to guess the song. How do you think they did?

Not well. Of the 120 songs tapped out on the table, the listeners only guessed 3 of them correctly – a measly 2.5 percent. But get this: before the listeners gave their answer, the tappers were asked to predict how likely their partner was to get it right. Their guess? Tappers thought their partners would get the song 50 percent of the time. You know, only overconfident by a factor of 20. What made the tappers so far off?

They lost perspective because they were “cursed” with the additional knowledge of the song title. Chip and Dan Heath use the story in their book Made to Stick to introduce the term:

“The problem is that tappers have been given knowledge (the song title) that makes it impossible for them to imagine what it’s like to lack that knowledge. When they’re tapping, they can’t imagine what it’s like for the listeners to hear isolated taps rather than a song. This is the Curse of Knowledge. Once we know something, we find it hard to imagine what it was like not to know it. Our knowledge has “cursed” us. And it becomes difficult for us to share our knowledge with others, because we can’t readily re-create our listeners’ state of mind.”

So it goes with communicating complex information. Because we have all the background knowledge and understanding, we’re overconfident that what we’re saying is clear to everyone else. WE know what we mean! Why don’t they get it? It’s tough to remember that other people won’t make the same inferences, have the same word-meaning connections, or share our associations.

It’s particularly important in science education. The more time a person spends in a field, the more the field’s obscure language becomes second nature. Without special attention, audiences might not understand the words being used – or worse yet, they might get the wrong impression.

Over at the American Geophysical Union blog, Callan Bentley gives a fantastic list of Terms that have different meanings for scientists and the public.

What great examples! Even though the scientific terms are technically correct in context, they’re obviously the wrong ones to use when talking to the public about climate change. An inattentive scientist could know all the material but leave the audience walking away with the wrong message.

We need to spend the effort to phrase ideas in a way the audience will understand. Is that the same as “dumbing down” a message? After all, complicated ideas require complicated words and nuanced answers, right? Well, no. A real expert on a topic can give a simple distillation of material, identifying the core of the issue. Bentley did an outstanding job rephrasing technical, scientific terms in a way that conveys the intended message to the public.

That’s not dumbing things down, it’s showing a mastery of the concepts. And he was able to do it by overcoming the “curse of knowledge,” seeing the issue from other people’s perspective. Kudos to him – it’s an essential part of science education, and something I really admire.

Spinoza, Gödel, and Theories of Everything

On the latest episode of Rationally Speaking, Massimo and I have an entertaining discussion with Rebecca Goldstein, philosopher, author, and recipient of the prestigious MacArthur “genius” grant. There’s a pleasing symmetry to her published oeuvre. Her nonfiction books, about people like philosopher Baruch Spinoza and mathematician Kurt Gödel, have the aesthetic sensibilities of novels, while her novels (most recently, “36 Arguments for the Existence of God: A Work of Fiction”) have the kind of weighty philosophical discussions one typically finds in non-fiction.

It’s a wide-ranging and fun conversation. My main complaint is just over her treatment of Spinoza. Basically, people say he “believed God was nature.” That always made me roll my eyes, because it’s not making a claim about the world; it’s merely redefining the word “God” to mean “nature,” for no good reason. I voice this complaint to Rebecca during the show and she defends Spinoza; you can judge her response for yourself, but I found it weak. It sounded like she was just pointing out some dubious similarities between nature and the typical conception of God.

Nevertheless! It’s certainly worth a listen:

http://www.rationallyspeakingpodcast.org/show/rs45-rebecca-newberger-goldstein-on-spinoza-goedl-and-theori.html