RS episode #53: Parapsychology

In Episode 53 of the Rationally Speaking Podcast, Massimo and I take on parapsychology, the study of phenomena such as extrasensory perception, precognition, and remote viewing. We discuss the type of studies parapsychologists conduct, what evidence they’ve found, and how we should interpret that evidence. The field is mostly not taken seriously by other scientists, which parapsychologists argue is unfair, given that their field shows some consistent and significant results. Do they have a point? Massimo and I discuss the evidence and talk about what the results from parapsychology tell us about the practice of science in general.

http://www.rationallyspeakingpodcast.org/show/rs53-parapsychology.html

You’re such an essentialist!

My latest video blog is about essentialism, and why it’s damaging to your rationality — and your happiness.

Thoughts on science podcasting: A dispatch from ScienceOnline 2012

I’m in Raleigh, NC this weekend for the sixth annual ScienceOnline un-conference, a gathering of 450 scientists, writers, bloggers, podcasters, educators, and others interested in how the internet is changing the way we conduct, and communicate, science. My contribution was this morning — I moderated a discussion on science podcasting with Desiree Schell, the eloquent host of Skeptically Speaking. We made a nicely complementary team. Her podcast is live, whereas mine is pre-recorded; hers is solo, whereas mine is a dialogue with a co-host; hers focuses on the practical applications of science to people’s lives and pocketbooks (e.g., the common cold, the claims of the cosmetics industry), whereas mine is more abstract and philosophical. So our perspectives, overlaid, created a kind of podcasting guide in 3D.

A few highlights:

What’s your niche? There are a lot of science and skepticism podcasts out there already, and Desiree and I both agreed that you need a well-defined “niche” in mind if you’re going to start your own. Maybe it’s a topic you think isn’t being covered enough, or it’s not being covered the way you think it should be, or maybe it’s a group you want to give a voice to. But there should be some reason your podcast exists other than the fact that you want to do a podcast.

For example, I consider Rationally Speaking’s niche to be in the philosophical implications of science. So instead of just covering topics like irrationality, or the science of love, we also try to hash out questions like, Why should we try to overcome irrationality? Does it actually make us happier, and what are the ethical implications of trying to make other people more rational? And if we understand the science of love, does that change our experience of love?

And then our other niche is the question of what constitutes good evidence for a claim: To what extent do fields like evolutionary psychology, string theory, and memetics make testable predictions, and if they don’t, can we have any confidence in their claims? Can we ever generalize from case studies? How do we know which experts to trust?  A lot of skeptic podcasts and blogs highlight claims that are unambiguously pseudoscience, but I think Rationally Speaking specializes in the murkier cases.

The outline versus the map: Desiree and I talked a lot about how to make podcast interviews and conversations go smoothly. When I first started doing Rationally Speaking, I would come into our tapings with a mental outline of the topics I wanted to cover, arranged in a nice order that flowed well… and as it turns out, that’s fine when you’re giving a lecture, solo, but it just doesn’t work when you throw other people into the mix. You don’t know what topics your guest is going to bring up that call for follow-up, and I never know what direction Massimo’s going to take the conversation in. And the problem with having an outline in your head is that once you diverge from it, you have no instructions for finding your way back.

So what I’ve settled on instead is more of a loose, web-like structure in my mind, where the topics aren’t in any set order, but for each topic, I’ve thought about how it connects to at least a couple of other topics. That way, wherever the conversation ends up, I have this map in my head of where I can go next.

Why a podcast at all? For that matter, you should really have a reason to do a podcast rather than write a blog. Podcasts have some significant downsides, compared to blogs. On the production end, they’re a hassle to record and edit, compared to writing a post, and they commit you to a specific length and schedule. On the consumption end, they’re inconvenient in that you can’t skim them at your own pace, you can’t skip down to another section, and you don’t get links or pictures to supplement the content.

But sometimes they really are better than a blog post. I think that’s especially true for treating controversial or multifaceted topics, the kind we look for in Rationally Speaking – hearing people debate a topic is far more engaging than reading one person’s point of view. Also, as Story Collider’s Ben Lillie pointed out during the conversation, listening to a science podcast creates an intimate connection to the scientist – when you’ve got headphones on and you’re hearing the scientist’s voice as if she’s right there with you, it takes barely any time to get a taste of her personality. And science could always use a little more humanizing.

My Little Pony: Reality is Magic!

(Cross-posted at 3 Quarks Daily)

You probably won’t be very surprised to hear that someone decided to reboot the classic ’80s My Little Pony cartoon based on a line of popular pony toys. After all, sequels and shout-outs to familiar brands have become the foundation of the entertainment industry. The new ‘n improved cartoon, called My Little Pony: Friendship is Magic, follows a nerdy intellectual pony named Twilight Sparkle, who learns about the magic of friendship through her adventures with the other ponies in Ponyville.

But you might be surprised to learn that My Little Pony: Friendship is Magic’s biggest accolades have come not from its target audience of little girls and their families, but from a fervent adult fanbase. I first heard of My Little Pony: Friendship is Magic from one of my favorite sources of intelligent pop culture criticism, The Onion’s AV Club, which gave the show an enthusiastic review last year. (I had my suspicions at first that the AV Club’s enthusiasm was meant to be ironic, but they insisted that the show wore down their defenses, and that it was “legitimately entertaining and lots of fun.” So either their appreciation of My Little Pony: Friendship is Magic is genuine, or irony has gotten way more poker-faced than I realized.)

And you might be even more taken aback to learn that many, if not most, of those adult My Little Pony: Friendship is Magic fans are men and that they’ve even coined a name for themselves: “Bronies.” At least, I was taken aback. In fact, my curiosity was sufficiently piqued that I contacted Purple Tinker, the person in charge of organizing the bronies’ seasonal convention in New York City. Purple Tinker was friendly and helpful, saying that he had read about my work in the skeptic/rationalist communities, and commended me as only a brony could: “Bravo – that’s very Twilight Sparkle of you!”

But when I finally sat down and watched the show, I realized that while Purple Tinker may be skeptic-friendly, the show he loves is not. The episode I watched, “Feeling Pinkie Keen,” centers on a pony named Pinkie Pie, who interprets the twitches in her tail and the itches on her flank as omens of some impending catastrophe, big or small. “Something’s going to fall!” Pinkie Pie shrieks, a few beats before Twilight Sparkle accidentally stumbles into a ditch. The other ponies accept her premonitions unquestioningly, but empirically minded Twilight Sparkle is certain that Pinkie Pie’s successes are either a hoax or a coincidence. She’s determined to get to the bottom of the matter, shadowing Pinkie Pie in secret to observe whether the premonitions disappear when there’s no appreciative audience around, and hooking Pinkie Pie up to what appears to be a makeshift MRI machine which Twilight Sparkle apparently has lying around her house, to see whether the premonitions are accompanied by any unusual brain activity.

Meanwhile, Twilight Sparkle is being more than a little snotty about how sure she is that she’s right, and how she just can’t wait to see the look on Pinkie Pie’s face when Pinkie Pie gets proven wrong. Which, of course, is intended to make it all the more enjoyable to the audience when — spoiler alert! — Twilight Sparkle’s investigations yield no answers, and Pinkie Pie’s premonitions just keep coming true. Finally, Twilight Sparkle admits defeat: “I’ve learned that there are some things you just can’t explain. But that doesn’t mean they’re not true. You just have to choose to believe.”

Nooo, Twilight Sparkle, no! You are a disgrace to empirical ponies everywhere. And I’m not saying that because Twilight Sparkle “gave in” and concluded that Pinkie Pie’s premonitions were real. After all, sometimes it is reasonable to conclude that an amazing new phenomenon is more likely to be real than a hoax, or a coincidence, or an exaggeration, etc. It depends on the strength of the evidence. Rather, I’m objecting to the fact that Twilight Sparkle seems to think that because she was unable to figure out how premonitions worked, that therefore science has failed.

Twilight Sparkle is an example of a Straw Vulcan, a character who supposedly represents the height of rationality and logic, but who ends up looking like a fool compared to other, less rational characters. That’s because the Straw Vulcan brand of rationality isn’t real rationality. It’s a gimpy caricature, crafted that way either because the writers want to make rationality look bad, or because they genuinely think that’s what rationality looks like. In a talk I gave at this year’s Skepticon IV conference, I described some characteristic traits of a Straw Vulcan, such as an inability to enjoy life or feel emotions, and an unwillingness to make any decisions without all the information. Now I can add another trait to my list, thanks to Twilight Sparkle: the attitude that if we can’t figure out the explanation, then there isn’t one.

Do you think it’s possible that anyone missed the anti-inquiry message? Hard to imagine, given the fact that the skeptical pony seems mainly motivated by a desire to prove other people wrong and gloat in their faces, and given her newly humbled admission that “sometimes you have to just choose to believe.” But just in case there was anyone in the audience who didn’t get it yet, the writers also included a scene in which Twilight Sparkle is only able to escape from a monster by jumping across a chasm – and she’s scared, but the other ponies urge her on by crying out, “Twilight Sparkle, take a leap of faith!”

And yes, of course, My Little Pony: Friendship is Magic is “just” a kids’ cartoon, and I can understand why people might be tempted to roll their eyes at me for taking its message seriously. I don’t know to what extent children internalize the messages of the movies, TV, books, and other media they consume. But I do know that there are plenty of messages that we, as a society, would rightfully object to if we found them in a kids’ cartoon – imagine if one of the ponies played dumb to win the favors of a boy-pony and then they both lived happily ever after. Or if an episode ended with Twilight Sparkle chirping, “I’ve learned you should always do whatever it takes to impress the cool ponies!” So why aren’t we just as intolerant of a show that tells kids: “You can either be an obnoxious skeptic, or you can stop asking questions and just have faith”?

Review: The Book of Mormon

(Re-posted with permission from my article in Issue 56 of The Philosopher’s Magazine)

Even if you’ve never watched a single episode of South Park, you’re probably aware that the show’s creators, Trey Parker and Matt Stone, love nothing more than a good bout of sacred cow tipping. Show me an ideology, political, religious or otherwise, and I’ll show you an episode of South Park that lampoons it with the show’s trademark blend of incisive satire and potty humour. So it was surprising that South Park’s terrible twosome wound up creating a smash-hit Broadway musical, which they have been describing, in interviews, as being pro-faith.

Well, the “smash-hit” part isn’t surprising. The Book of Mormon pulls off the impressive trick of winking at the clichés of musical theatre and embracing them at the same time. (After all, the clichés are clichés for a reason – they work.) The story follows two young Mormon men paired together for their mission to Uganda: Kevin, who’s used to being the golden boy and needs to learn that everything’s not always about him, and Arnold, a hapless schmuck who needs to learn some self-confidence. They’re an odd couple and, like all odd couples thrown together under unusual circumstances, they’re going to have to learn to get along. There’s also a sweet Ugandan ingénue, a villainous warlord threatening her village, and a whole lot of really catchy songs. It’s no wonder the musical garnered nine Tony awards this year, including Best Musical, and that it’s been selling out its shows since it opened in previews in February.

But to hear Parker and Stone refer to The Book of Mormon as “pro-faith” was surprising, especially given how often they poke fun at Mormonism. Mormon beliefs can seem so ridiculous to outsiders, in fact, that Parker and Stone wisely realise they don’t need to do much active mocking – instead, they simply step back and let the scripture speak for itself. “I believe,” one missionary warbles in a climactic number reaffirming his commitment to his faith, “that God lives on a planet called ‘Kolob’! And I believe that in 1978, God changed his mind about black people!” With raw material like this, parody is both unnecessary and impossible.

And it’s not just Mormonism that gets skewered. It’s also the self-images of all believers who like to see themselves, and their motivations, as more saintly than they really are. Against a backdrop of war, poverty and disease, one missionary wonders, “God, why do you let bad things happen?” and then adds what is, for many people, the true concern: “More to the point, why do you let bad things happen to me?” There’s only one thing Parker and Stone enjoy sinking their talons into more than absurdity, and that’s hypocrisy.

So in what sense is The Book of Mormon “pro-faith?” Well, it’s affectionate in its portrayal of Mormons as people, most of whom come off as well-meaning, if goofy and often naive. Parker and Stone have made no secret of the fact that they find Mormons just too gosh-darned nice to dislike. But what they’re mainly referring to when they call their musical “pro-faith” is the message it sends the audience home with: that religion can be a powerful and inspiring force for good, as long as you don’t interpret scripture too strictly.

By the end of The Book of Mormon, Africans and missionaries alike are united in a big happy posse that preaches love, joy, hope and making the world a better place. Having learned by now that it’s more important to help people than to rigidly adhere to dogma, Kevin sings, “We are still Latter Day Saints, all of us. Even if we change some things, or we break the rules, or we have complete doubt that God exists. We can still all work together and make this our paradise planet.”

That’s an appealing sentiment, especially to the sort of theatregoer who prides himself on being progressive and tolerant. It means we can promote all the values we cherish – happiness, freedom, human rights and so on – without ever having to take an unpopular anti-religion stand. But is it plausible? How, exactly, can religion make the world a better place?

I don’t know and, apparently, neither does The Book of Mormon. The central confusion you’ll notice in the musical is that it keeps conflating two very different kinds of “faith”. One could be called “figurative faith”, the warm and fuzzy kind that emerges at the end of the show, which is explicitly about bettering the world but seems to be faith in name only, as it doesn’t involve any actual belief in anything. “What happens when we’re dead? Who cares! We shouldn’t think that far ahead. The only latter day that matters is tomorrow,” the villagers sing. Once you strip away God, and an afterlife, and the requirement of belief in particular dogma, it’s not clear that what’s left bears any resemblance to religion anymore. With its progressive values and its emphasis on the here-and-now rather than the hereafter, it’s basically just humanism.

The other kind of faith in The Book of Mormon is literal faith, but for the most part, it doesn’t actually help anyone. Ugandan sweetheart Nabalungi believes in salvation in earnest – she’s under the impression that becoming Mormon means she’s going to be transported out of her miserable life to a paradise called “Salt Lake City”, which she imagines must have huts with gold-thatched roofs and “a Red Cross on every corner with all the flour you can eat!” she sings rapturously. But she ends up crushed when she eventually learns that, no, she doesn’t get to leave Uganda after all. (“Of course, Salt Lake City’s only a metaphor,” her fellow tribe members inform her, apparently in figurative faith mode at that point.)

To be fair, there is one example of the power of literal faith in The Book of Mormon. When a villager announces his plans to circumcise his own daughter, and another is about to rape an infant in an attempt to cure himself of AIDS, Arnold manages to stop them by inventing some new scripture for the occasion. “And the Lord said, ‘If you lay with an infant, you shall burn in the fiery pits of Mordor,’” he “reads” from the Bible. (Being a science fiction and fantasy nerd, and having slept through most of Sunday school, Arnold falls back on what he knows.) So I suppose that counts as a point in favour of faith’s power to help the world, albeit conditional on the bleak premise that the only way to get people to stop raping babies and mutilating women is to threaten them with Hell … or Mordor.

Of course, the fact that The Book of Mormon’s views on faith are less than fully coherent doesn’t detract much from the pleasures of its tart-tongued satire, story, and songs. There are just a handful of moments that might raise a philosopher’s eyebrow, such as when everyone sings, in the exuberant final number, “So if you’re sad, put your hands together and pray, that tomorrow’s gonna be a Latter Day. And then it probably will be a Latter Day!” It almost feels churlish to ask “Wait, how does that work?” when everyone onstage is having such a good time singing about joy and peace and brotherhood; nevertheless, one does wonder. Maybe that will be covered in the sequel.

How rationality can make your life more awesome

(Cross-posted at Rationally Speaking)

Sheer intellectual curiosity was what first drew me to rationality (by which I mean, essentially, the study of how to view the world as accurately as possible). I still enjoy rationality as an end in itself, but it didn’t take me long to realize that it’s also a powerful tool for achieving pretty much anything else you care about. Below, a survey of some of the ways that rationality can make your life more awesome:

Rationality alerts you when you have a false belief that’s making you worse off.

You’ve undoubtedly got beliefs about yourself – about what kind of job would be fulfilling for you, for example, or about what kind of person would be a good match for you. You’ve also got beliefs about the world – say, about what it’s like to be rich, or about “what men want” or “what women want.” And you’ve probably internalized some fundamental maxims, such as: When it’s true love, you’ll know. You should always follow your dreams. Natural things are better. Promiscuity reduces your worth as a person.

Those beliefs shape your decisions about your career, what to do when you’re sick, what kind of people you decide to pursue romantically and how you pursue them, how much effort you should be putting into making yourself richer, or more attractive, or more skilled (and skilled in what?), more accommodating, more aggressive, and so on.

But where did these beliefs come from? The startling truth is that many of our beliefs became lodged in our psyches rather haphazardly. We’ve read them, or heard them, or picked them up from books or TV or movies, or perhaps we generalized from one or two real-life examples.

Rationality trains you to notice your beliefs, many of which you may not even be consciously aware of, and ask yourself: where did those beliefs come from, and do I have good reason to believe they’re accurate? How would I know if they’re false? Have I considered any other, alternative hypotheses?

Rationality helps you get the information you need.

Sometimes you need to figure out the answer to a question in order to make an important decision about, say, your health, or your career, or the causes that matter to you. Studying rationality reveals that some ways of investigating those questions are much more likely to yield the truth than others. Just a few examples:

“How should I run my business?” If you’re looking to launch or manage a company, you’ll have a huge leg up over your competition if you’re able to rationally determine how well your product works, or whether it meets a need, or what marketing strategies are effective.

“What career should I go into?” Before committing yourself to a career path, you’ll probably want to learn about the experiences of people working in that field. But a rationalist also knows to ask herself, “Is my sample biased?” If you’re focused on a few famous success stories from the field, that doesn’t tell you very much about what a typical job is like, or what your odds are of making it in that field.
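To make the sampling point concrete, here is a toy simulation; the income distribution and every number in it are made up purely for illustration, but it shows what judging a field by its famous success stories does to your estimate of a “typical” outcome:

    import random
    import statistics

    random.seed(0)

    # Hypothetical incomes for everyone who tried this career path: most people
    # earn modestly, a handful do spectacularly well (a long-tailed distribution).
    incomes = [random.paretovariate(1.5) * 20_000 for _ in range(100_000)]

    # What an unbiased sample of people in the field would tell you:
    print(f"Median income, everyone who tried:   ${statistics.median(incomes):,.0f}")

    # What you'd conclude from the success stories alone: the 100 top earners,
    # the ones who get profiled, interviewed, and held up as examples.
    famous = sorted(incomes, reverse=True)[:100]
    print(f"Median income, famous examples only: ${statistics.median(famous):,.0f}")

The two medians differ by a couple of orders of magnitude, and only the first one tells you anything about what a typical job in the field is actually like.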

It’s also an unfortunate truth that not every field uses reliable methods, and so not every field produces true or useful work. If that matters to you, you’ll need the tools of rationality to evaluate the fields you’re considering working in. Fields whose methods are controversial include psychotherapy, nutrition science, economics, sociology, consulting, string theory, and alternative medicine.

“How can I help the world?” Many people invest huge amounts of money, time, and effort in causes they care about. But if you want to ensure that your investment makes a difference, you need to be able to evaluate the relevant evidence. How serious of a problem is, say, climate change, or animal welfare, or globalization? How effective is lobbying, or marching, or boycotting? How far do your contributions go at charity X versus charity Y?

Rationality shows you how to evaluate advice.

Learning about rationality, and how widespread irrationality is, sparks an important realization: You can’t assume other people have good reasons for the things they believe. And that means you need to know how to evaluate other people’s opinions, not just based on how plausible their opinions seem, but based on the reliability of the methods they used to form those opinions.

So when you get business advice, you need to ask yourself: What evidence does she have for that advice, and are her circumstances relevant enough to mine? The same is true when a friend swears by some particular remedy for acne, or migraines, or cancer. Is he repeating a recommendation made by multiple doctors? Or did he try it once and get better? What kind of evidence is reliable?

In many cases, people can’t articulate exactly how they’ve arrived at a particular belief; it’s just the product of various experiences they’ve had and things they’ve heard or read. But once you’ve studied rationality, you’ll recognize the signs of people who are more likely to have accurate beliefs: People who adjust their level of confidence to the evidence for a claim; people who actually change their minds when presented with new evidence; people who seem interested in getting the right answer rather than in defending their own egos.

Rationality saves you from bad decisions.

Knowing about the heuristics your brain uses and how they can go wrong means you can escape some very common, and often very serious, decision-making traps.

For example, people often stick with their original career path or business plan for years after the evidence has made clear that it was a mistake, because they don’t want their previous investment to be wasted. That’s thanks to the sunk cost fallacy. Relatedly, people often allow cognitive dissonance to convince them that things aren’t so bad, because the prospect of changing course is too upsetting.

And in many major life decisions, such as choosing a career, people envision one way things could play out (“I’m going to run my own lab, and live in a big city…”) – but they don’t spend much time thinking about how probable that outcome is, or what the other probable outcomes are. That’s the narrative fallacy at work: a scenario imagined in vivid detail feels more plausible, even though each added detail can only make it less probable.
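The arithmetic behind that trap is worth spelling out. In this quick sketch the scenario and all the probabilities are invented, chosen only to illustrate the point: every detail you add to the imagined future is one more conjunct, so the odds can only shrink even as the story gets more vivid.

    # Each entry is the assumed probability of that detail holding, given the
    # details before it. The numbers are made up; the point is the multiplication.
    details = [
        ("I land a research job at all", 0.30),
        ("...running my own lab", 0.50),
        ("...in a big city", 0.40),
        ("...within five years", 0.50),
    ]

    p = 1.0
    for description, p_given_rest in details:
        p *= p_given_rest
        print(f"{description:<35} cumulative probability: {p:.1%}")

By the time the daydream is fully fleshed out, the version that feels most concrete is also the least likely of the bunch, which is why it pays to ask what the other plausible outcomes are.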

Rationality trains you to step back from your emotions so that they don’t cloud your judgment.

Depression, anxiety, anger, envy, and other unpleasant and self-destructive emotions tend to be fueled by what cognitive therapy calls “cognitive distortions,” irrationalities in your thinking such as jumping to conclusions based on limited evidence; focusing selectively on negatives; all-or-nothing thinking; and blaming yourself, or someone else, without reason.

Rationality breaks your habit of automatically trusting your instinctive, emotional judgments, encouraging you instead to notice the beliefs underlying your emotions and ask yourself whether those beliefs are justified.

It also trains you to notice when your beliefs about the world are being colored by what you want, or don’t want, to be true. Beliefs about your own abilities, about the motives of other people, about the likely consequences of your behavior, about what happens after you die, can be emotionally fraught. But a solid training in rationality keeps you from flinching away from the truth – about your situation, or yourself — when learning the truth can help you change it.

What’s so special about living longer?

Atheist death panel: Red America's suspicions confirmed.

After reading about the death panel we held at Skepticon IV last week, a very clever philosopher friend of mine named Henry Shevlin wrote to me with a challenge to the transhumanist perspective. The transhumanist argument, which Eliezer made eloquently in the panel, is that death is a terrible thing that we should be striving to prevent for as long as possible.
Henry asks:

“Is death a tragedy because it involves a possible loss of utility, or because there’s some special harm in the annihilation of the individual? So consider two scenarios… Earth 1B and Earth 2B. Both of them have 100 million inhabitants at any one time. But Earth 1B has a very high life expectancy and a very low birth rate, while Earth 2B has a lower life expectancy and a very high birth rate. Otherwise, though, the two worlds are very similar. Which world is morally superior, by which I mean, generates more utils?”

Good question. Why, exactly, is prolonging existing lives better than creating new lives?
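One way to see why this is genuinely puzzling: under the most naive accounting, where total utility is just happy person-years lived, the two worlds come out exactly even. Here’s a back-of-the-envelope version (the per-person-year bookkeeping is my own simplification, not Henry’s):

    # Toy calculation: 1 util per happy person-year, steady-state population.
    POPULATION = 100_000_000   # inhabitants at any given moment, in both worlds
    CENTURY = 100              # tally utils over the same 100-year window

    def total_utils(life_expectancy):
        # With a fixed population, the number of lives lived over the century
        # scales inversely with how long each life lasts...
        lives_lived = POPULATION * CENTURY / life_expectancy
        # ...so total person-years (and hence utils) come out the same either way.
        return lives_lived * life_expectancy

    print(f"Earth 1B (long lives, few births):   {total_utils(100):.2e} utils/century")
    print(f"Earth 2B (short lives, many births): {total_utils(25):.2e} utils/century")

If the only thing wrong with death were the utility it erases, this tally suggests there’d be nothing to choose between the two worlds, which is exactly the intuition the transhumanist position has to account for.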

Let’s start with Henry’s Option 1 — that a person’s death is a tragedy because of the loss of the utility that person would have had, if he hadn’t died. Starting with this premise, can we justify our intuition that it’s better to sustain a pre-existing life than to create a new one?

One possible tack is to say that we can only compare utilities of possible outcomes for currently existing people — so the utility of adding a new, happy person to this world is undefined (and, being undefined, it can’t compensate for the utility lost from an existing person’s death). Sounds reasonable, perhaps. But that also implies that the utility of adding a new, miserable person to this world is undefined. That doesn’t sound right! I definitely want a moral theory which says that it’s bad to create beings whose lives are sheer agony.

You might also be tempted to argue that utility’s not fungible between people. In other words, my loss of utility from dying can’t be compensated for by the creation of new utility somewhere else in the world. But that renders utilitarianism completely useless! If utility’s not fungible, then you can’t say that it’s good for me to pay one penny to save you from a lifetime of torture.

Or you could just stray from utilitarianism in this case, and claim that the loss of a life is bad not just because of the loss of utility it causes. That’s Henry’s Option 2 — that death is a tragedy because there’s some special harm in the annihilation of the individual. You could then argue that the harm caused by the death of an existing person vastly outweighs the good caused by creating a new person. I’m uncomfortable with this idea, partly because there doesn’t seem to be any way to quantify the value of a life if you’re not willing to stick to the measuring system of utils. But I’m also uncomfortable with it because it seems to imply that it’s always bad to create new people, since, after all, the badness of their deaths is going to outweigh the good of their lives.

ETA: Of course, you could also argue that you care more about the utils experienced by your friends and family than about the utils that would be experienced by new people. That’s probably true, for most people, and understandably so. But it doesn’t resolve the question of why you should prefer that an unknown stranger’s life be prolonged rather than that a new life be created.

The Straw Vulcan: Hollywood’s illogical approach to logical decisionmaking

I gave a talk at Skepticon IV last weekend about Vulcans and why they’re a terrible example of rationality. I go through five principles of Straw Vulcan Rationality(TM), give examples from Star Trek and from real life, and explain why they’re mistaken:

  1. Being rational means expecting everyone else to be rational too.
  2. Being rational means you should never make a decision until you have all the information.
  3. Being rational means never relying on intuition.
  4. Being rational means eschewing emotion.
  5. Being rational means valuing only quantifiable things — like money, productivity, or efficiency.

In retrospect, I would’ve streamlined the presentation more, but I’m happy with the content — I think it’s an important and under-appreciated topic. The main downside was just that everyone wanted to talk to me afterwards, not about rationality, but about Star Trek. I don’t know the answer to your obscure trivia questions, Trekkies!

 

UPDATE: I’m adding my diagrams of the Straw Vulcan model of ideal decisionmaking, and my proposed revisions to it, since those slides don’t appear in the video:

The Straw Vulcan view of the relationship between rationality and emotion.

After my revisions.

RS #48: Philosophical Counseling

Can philosophy be a form of therapy? On the latest episode of Rationally Speaking, we interview Lou Marinoff, a philosopher who founded the field of “philosophical counseling,” in which people pay philosophers to help them deal with their own personal problems using philosophy. For example, one of Lou’s clients wanted advice on whether to quit her finance job to pursue a personal goal; another sought help deciding how to balance his son’s desire to go to Disneyland with his own fear of spoiling his children.

As you can hear in the interview, I’m interested but I’ve got major reservations. I certainly think that philosophy can improve how you live your life — I’ve got some great examples of that from personal experience. But I’m skeptical of Lou’s project for two related reasons: first, because I think most problems in people’s lives are best addressed by a combination of psychological science and common sense. They require a sophisticated understanding of how our decision-making algorithms go wrong — for example, why we make decisions that we know are bad for us, how we end up with distorted views of our situations and of our own strengths and weaknesses, and so on. Those are empirical questions, and philosophy’s not an empirical field, so relying on philosophy to solve people’s problems is going to miss a large part of the picture.

The other problem is that it wasn’t at all clear to me how philosophical counselors choose which philosophy to cite. For any viewpoint in the literature, you can pretty reliably find an opposing one. In the case of the father afraid of spoiling his kid, Lou cited Aristotle to argue for an “all things in moderation” policy. But, I pointed out, he could just as easily have cited Stoic philosophers arguing that happiness lies in relinquishing desires.  So if you can pick and choose any philosophical advice you want, then aren’t you really just giving your client your own opinion about his problem, and just couching your advice in the words of a prestigious philosopher?

Hear more at Rationally Speaking Episode 48, “Philosophical Counseling.”

What do philosophers think about intuition?

Earlier this year I complained, on Rationally Speaking, about the fact that so many philosophers think it’s sufficient to back up their arguments by citing “intuition.” It’s a tricky term to pin down, but generally philosophers cite intuition when they think something is “clearly true” but can’t demonstrate it with logic or evidence. So, for example, philosophers of ethics will often claim that things are “good” or “bad” by citing their intuition. And philosophers of mind will cite their intuitions to argue that certain things would or wouldn’t be conscious (for example, David Chalmers relies on intuition to argue for the theoretical possibility of “philosophical zombies,” creatures that would act and respond exactly like conscious human beings, but which wouldn’t be conscious).

I cited many examples, not only of philosophers using intuitions as evidence, but of philosophers acknowledging that appeals to intuition are ubiquitous in the field. (“Intuitions often play the role that observation does in science – they are data that must be explained, confirmers or the falsifiers of theories,” wrote one philosopher.) That’s worrisome, to me, because the whole point of philosophy is allegedly to figure out whether our intuitive judgments make sense. It’s also worrisome to me because intuitions vary sharply from person to person; for example, I don’t agree at all with G. E. Moore’s argument that it is intuitively obvious that it’s “better” to have a planet full of sunsets and waterfalls than one with filth, even if no one ever gets to see that planet. (He may prefer a universe that contains Planet Waterfall to one that contains Planet Filthy, but I don’t think that makes the former objectively “better.”)

In the comment thread under his response-post, Massimo objected that intuitions are not, in fact, widespread in philosophy. “Julia, a list of cherry picked citations an argument doesn’t make,” he wrote, and he asked me if I had randomly polled philosophers. I hadn’t, of course.

But I recently came across two people who did. Kuntz & Kuntz’s “Surveying Philosophers About Philosophical Intuition,” from the March issue of the Review of Philosophy and Psychology, surveyed 282 academic philosophers and found that 51% of them thought that intuitions are “useful to justification in philosophical methods.”

Because the term “intuition” is so nebulous, the researchers also presented their survey respondents with a list of some of the more common ways of defining intuition, and asked them to rank how apt they thought the definitions were. The top two “most apt” definitions of intuition were the following:

  1. “Judgment that is not made on the basis of some kind of observable and explicit reasoning process”
  2. “An intellectual happening whereby it seems that something is the case without arising from reasoning, or sensorial perceiving, or remembering.”

The survey also shed light on one reason why Massimo, a philosopher of science, might have underestimated the prevalence of appeals to intuition in philosophy as a whole: “In regard to the usefulness of intuitions to justification, our results also revealed that philosophers of science expressed significantly lower agreement than philosophers doing metaphysics, epistemology, ethics, and philosophy of mind,” Kuntz and Kuntz wrote. That squares with my experience, too — most of the philosophy of science I’ve read has been grounded in logic, math, and evidence.

Another important side point the researchers make is that there’s more than one way to use your intuitions. Philosophers certainly do use them as justification for claims, but they also use intuitions to generate claims which they then justify using more rigorous methods like logic and evidence. 83% of survey respondents agreed that intuitions are useful in that latter way, and I agree too — I have no problem with people using intuition to generate possible ideas, I just have a problem with people saying “This feels intuitively true to me, so it must be true.”
