How to argue on the internet

At least a dozen people have sent this XKCD cartoon to me over the years.

It’s plenty hard enough to get someone to listen to your arguments in a debate, given how naturally attached people are to their own ideas and ways of thinking. But it becomes even harder when you trigger someone’s emotional side by making them feel like you’re attacking them, putting them automatically into “defend myself” mode (or worse, “lash out” mode) rather than “listen reasonably” mode.

Unfortunately, online debates are full of emotional tripwires, partly because tone isn’t always easy to detect in the written word, and even comments intended neutrally can come off as snide or snippy… and also because not having to say something to someone’s face seems to bring out the immature child inside grown adults.

But on the plus side, debating online at least has the benefit that you can take the time to think about your wording before you comment or email someone. Below, I walk you through my process of revising my wording to reduce the risk of making someone angry and defensive, and increase my chances that they’ll genuinely consider what I have to say.

DRAFT 1 (My first impulse is to say): “You idiot, you’re ignoring…”

Duh. Get rid of the insult.

DRAFT 2: “You’re ignoring…”

I should make it clear I’m attacking an idea, not a person.

DRAFT 3: “Your argument is ignoring…”

This can still be depersonalized. By using the word “your,” I’m encouraging the person to identify the argument with himself, which can still trigger a defensive reaction when I attack the argument. That’s the exact opposite of what I want to do.

DRAFT 4: “That argument is ignoring…”

Almost perfect. The only remaining room for improvement is the word “ignoring,” which implies an intentional disregard, and sounds like an accusation. Better to use something neutral instead:

DRAFT 5: “That argument isn’t taking into account…”

Done. Of course, chances are I still won’t persuade them, but at least I’ve given myself the best chance possible… and done my part to help keep the Internet civilized. Or at least a tiny bit less savage!

Thinking in greyscale

Have you ever converted an image from greyscale into black and white? Basically, your graphics program rounds all of the lighter shades of grey up to “white,” and all of the darker shades of grey down to “black.” The result is a visual mess – same rough shape as the original, but unrecognizable.
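If you’re curious what that conversion looks like in practice, here’s a minimal Python sketch using the Pillow library; the filename and the cutoff value of 128 are just placeholders I picked for illustration, not anything the analogy depends on:

    from PIL import Image

    # Load an image and convert it to 8-bit greyscale ("L" mode).
    img = Image.open("photo.jpg").convert("L")

    # Round every pixel to one extreme or the other:
    # values of 128 and above become white (255), everything below becomes black (0).
    black_and_white = img.point(lambda value: 255 if value >= 128 else 0)
    black_and_white.save("photo_black_and_white.png")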

Something similar happens to our mental picture of the world whenever we talk about how we “believe” or “don’t believe” an idea. Belief isn’t binary. Or at least, it shouldn’t be. In reality, while we can be more confident in the truth of some claims than others, we can’t be absolutely certain of anything. So it’s more accurate to talk about how much we believe a claim, rather than whether or not we believe it. For example, I’m at least 99% sure that the moon landing was real. My confidence that mice have the capacity to suffer is high, but not quite as high. Maybe 85%. Ask me about a less-developed animal, like a shrimp, and my confidence would fall to near-uncertainty, around 60%.

Obviously there’s no rigorous, precise way to assign a number to how confident you are about something. But it’s still valuable to get in the habit, at least, of qualifying your statements of belief with words like “probably,” or “somewhat,” or “very.” It just helps keep you thinking in greyscale, and reminds you that different amounts of evidence should yield different degrees of belief. Why lose all that resolution unnecessarily by switching to black and white?

More importantly, the reason you shouldn’t ever have 0% or 100% confidence in any empirical claim is that it implies there is no conceivable evidence that could ever make you change your mind. You can prove this formally with Bayes’ theorem, which is a simple rule of probability that also serves as a way of describing how an ideal reasoner would update his belief in some hypothesis “H” after encountering some evidence “E.” Bayes’ theorem can be written like this:

P[H | E] = P[E | H] × P[H] / ( P[E | H] × P[H] + P[E | not H] × P[not H] )

… in other words, it’s a rule for how to take your prior probability of a hypothesis, P[H], and update it based on new evidence E to get the probability of H given that evidence: P[H | E].
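To make the update rule concrete, here’s a minimal Python sketch of it; the function name and the example numbers are mine, chosen purely for illustration:

    def bayes_update(p_h, p_e_given_h, p_e_given_not_h):
        # Return P[H | E] from the prior P[H] and the probability of
        # seeing the evidence E under H and under not-H.
        numerator = p_e_given_h * p_h
        denominator = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
        return numerator / denominator

    # A 70% prior, combined with evidence that's three times likelier if H is true
    # than if it's false, gets pushed up to 87.5%.
    print(bayes_update(0.70, 0.60, 0.20))  # 0.875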

So what happens if you think there’s zero chance of some hypothesis H being true? Well, just plug in zero for P[H]: because zero times anything is zero, the numerator becomes zero, and so does the whole fraction. You don’t have to know any of the other terms to conclude that P[H | E] = 0. That means that if you start out with zero belief in a hypothesis, you’ll always have zero belief in that hypothesis no matter what evidence comes your way.

And what if you start out convinced, beyond a shadow of a doubt, that some hypothesis is true? That’s akin to saying that P[H] = 1, which also implies you must put zero probability on all the other possible hypotheses. So plug in 1 for P[H] and 0 for P[not H] in the equation above: the numerator becomes P[E | H], the denominator becomes P[E | H] + 0, and the fraction equals 1. Which means that no matter what evidence you come across, if your belief in a hypothesis is 100% before seeing some evidence (that is, P[H] = 1), then your belief in that hypothesis will still be 100% after seeing that evidence (that is, P[H | E] = 1).
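Running that same sketch with the two extreme priors shows the trap; again, the likelihood numbers are arbitrary:

    # A prior of zero stays at zero, no matter how strongly the evidence favors H...
    print(bayes_update(0.0, 0.99, 0.01))  # 0.0

    # ...and a prior of one stays at one, even when the evidence heavily favors not-H.
    print(bayes_update(1.0, 0.01, 0.99))  # 1.0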

As much as I’m in favor of thinking in greyscale, however, I will admit that it can be really difficult to figure out how to feel when you haven’t committed yourself wholeheartedly to one way of viewing the world. For example, if you hear that someone has been accused of rape, your estimation of the likelihood of his guilt should be somewhere between 0 and 100%, depending on the circumstances. But we want, instinctively, to know how we should feel about the suspect. And the two possible states of the world (he’s guilty/he’s innocent) have such radically different emotional attitudes associated with them (“That monster!”/”That poor man!”). So how do you translate your estimated probability of his guilt into an emotional reaction? How should you feel about him if you’re, say, 80% confident he’s guilty and 20% confident he’s innocent? Somehow, finding a weighted average of outrage and empathy doesn’t seem like the right response — and even if it were, I have no idea what that would feel like.

RS#34: Why do people listen to celebrities’ opinions?

Episode #34 of the Rationally Speaking podcast is all about celebrities giving their opinion on topics they don’t know much about: why do they get invited to opine on those topics at all, and why are people influenced by them? Even people who are experts in a technical field often give misleading viewpoints when they’re offered a platform to talk about fields other than their own. Massimo and I talk about some examples, describe some relevant psychological studies on influence, and still somehow manage to work in our standard bickering about philosophy.

http://www.rationallyspeakingpodcast.org/show/rs34-celebrities-and-the-damage-they-can-do.html

How to spot a rationalization

In this week’s video blog, I talk about why it’s important to be able to spot your own rationalizations, and tricks to help you do so:

Four ways to define “rational”

In this week’s video, I explain what I mean by “rational,” and discuss some of the other meanings that people attribute to this often-ambiguous word.
