## A Pretty-Good Mathematical Model of Perfectionism

I struggle with perfectionism. Well, not so much “struggle with” — I’m f*cking great at it. It comes naturally.

There are some upsides, but perfectionism is also associated with anxiety, depression, procrastination, and damaged relationships. Perhaps you, like me, have spent far too much time and emotional energy making sure that an email had the right word choice, had no typos, didn’t reuse a phrase in successive sentences/paragraphs, and closed with the ‘correct’ sign-off. (‘Best,’ is almost always optimal, by the way.)

“If I couldn’t do something that rated 10 out of 10 — or at least close to that — I didn’t want to do it at all. Being a perfectionist was an ongoing source of suffering and unhappiness for me … Unfortunately, many of us have been conditioned to hold ourselves to impossible standards. This is a stressful mind state to live in, that’s for sure.” ~ Tony Bernard J.D.

The topic of perfectionism confused me for years. Of course you want things to be perfect; why would you ever actively want something to be worse? However, there’s way more to it than that: It’s a complex interplay between effort, time, motivation, and expectations.

Far too many self-help recommendations essentially said “Be ok with mediocrity!” which… did not speak to me, to say the least.

To better understand the concept, I went through a number of books and papers before building a quasi-mathematical model. You know, like ya’do.

I’ve come to see perfectionism as a mindset with a particular calibration between the quality of your work and your emotional reaction — with decreased sensitivity to marginal differences in lower-quality work and increasing sensitivity as the quality goes up.

• In a “Balanced” mindset, you become happier in linear proportion to how much better your work is going. (y = x)
• In a “Satisficing” mindset — taking a pass/fail test, for example — you care about whether something is “good enough”. Most of your emotional variance comes as you approach and meet that threshold. (y = e^x / (1 + e^x))
• In a Perfectionist mindset, the relationship between quality and emotion is polynomial. You feel almost equally bad about scoring a 40% on a test vs. a 65%, but the difference between a 90% and 93% looms large. (y = x^7)
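These three calibrations are simple enough to sketch in code. Here quality and emotion are both scaled to 0–1, and the satisficing threshold and steepness are illustrative choices rather than anything fitted to data:

```python
import math

def balanced(quality):
    """Emotion tracks quality in direct proportion (y = x)."""
    return quality

def satisficing(quality, threshold=0.6, steepness=12.0):
    """Logistic curve: most emotional variance sits near the pass/fail threshold."""
    return 1 / (1 + math.exp(-steepness * (quality - threshold)))

def perfectionist(quality, power=7):
    """Polynomial curve: flat at low quality, steep near the top (y = x^7)."""
    return quality ** power

# A 40% vs. 65% barely registers for the perfectionist,
# while 90% vs. 93% is a comparatively big emotional jump.
for q in (0.40, 0.65, 0.90, 0.93):
    print(f"quality {q:.2f} -> balanced {balanced(q):.3f}, "
          f"satisficing {satisficing(q):.3f}, perfectionist {perfectionist(q):.3f}")
```

On the x^7 curve, the 0.90 → 0.93 step is worth more than twice the entire 0.40 → 0.65 step, even though the latter spans eight times as much quality.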

Looking at the model, I realized it could explain a number of experiences I’d had.

### Why even small tasks seem daunting to a perfectionist

A common experience with a perfectionist mindset is having trouble ‘letting go’ of a project — we want to keep tinkering with it, improving it, and never feel quite comfortable moving on.  (I don’t want to say how long this draft sat around.)

This makes sense given the model:

When I think about clicking ‘send’ or ‘post’ before I’ve checked for typos, before I’ve reread everything, before considering where it might be wrong or unclear… it just feels, well, WRONG. I’m not yet happy with it and have trouble declaring it done.

Apart from requiring more time and effort, this can make even seemingly trivial tasks feel daunting. Internally, if you know that a short email will take an hour and a half, it’s going to loom large, even if you have trouble explaining quite why such a small thing is making you feel overwhelmed.

What’s helped me: A likely culprit is overestimating the consequences of mistakes. One solution is to be concrete and write down what you expect to happen if it turns out you have a typo, miss a shot, or bomb a test. Sometimes all it takes to readjust is examining those expectations consciously. Other times you’ll need to experience the ‘failure’, at which point you can compare it to your stated expectations.

### Why perfectionists give up on hobbies and tasks easily

Another way to look at this is: if you don’t expect to reach high standards, a project just doesn’t seem worth doing.

The result is a kind of min-max approach to life: If you can’t excel, don’t bother spending time on it.

That’s not necessarily a bad thing!

However, we don’t always have control. In my nonprofit communications career, I sometimes got assigned to write press releases on topics that *might* get attention, but which seemed not newsworthy to me. It may have still been worth the few hours of my time in case it grabbed a reporter’s eye. It was important to keep my job. But I had so. much. trouble. getting myself to do the work.

Even in the personal realm, this makes picking up a new hobby difficult. If it doesn’t seem like you’re going to be amazing at it, the hobby as a whole loses its luster.

What’s helped me: A big problem for me has been overlooking the benefits gained from so-called “failure”. Once I start to factor in e.g. how much I expect to learn (so that I can do better in the future) I end up feeling much better about giving things a shot.

### Why procrastination (and anxiety) are common

At a granular scale, the problem becomes worse. Rather than “How good do I expect to feel at the end of this?” our emotional reaction is probably trained by the in-the-moment “How much happier do I expect to feel as a result of one more bit of work?”

In other words, we can view the derivative/slope of these graphs as motivation:

With a perfectionist mindset, the bigger and further away a goal is, the more difficult it will be to feel motivated in the moment.  For much of the time, we’re trying to push ourselves to work without getting any internal positive reinforcement.
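To make the slope-as-motivation idea concrete, here is a numerical derivative of the perfectionist curve (y = x^7; the sample quality levels are just illustrative):

```python
def perfectionist(quality, power=7):
    """The perfectionist emotion curve (y = x^7)."""
    return quality ** power

def marginal_motivation(curve, quality, dq=1e-6):
    """Numerical slope: the felt payoff of one more bit of quality."""
    return (curve(quality + dq) - curve(quality)) / dq

# Early in a project the curve is nearly flat, so a unit of effort
# produces almost no internal reward; near the goal it's steep.
print("early (q=0.30):", marginal_motivation(perfectionist, 0.30))
print("late  (q=0.95):", marginal_motivation(perfectionist, 0.95))
```

The slope at 0.95 is roughly a thousand times the slope at 0.30, which is one way to read “trying to push ourselves to work without getting any internal positive reinforcement.”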

This is a particular issue in the Effective Altruism movement, where the goal is to *checks notes* Save the World. Or, more precisely, to “Figure out how to do the most good, and then do it.”

It’s true that as a perfectionist nears their goal, they’re extremely motivated! But that also means that the stakes are very high for every decision and every action.  …Which is a recipe for anxiety. Terrific.

What’s helped me: To the extent that I can, I find that breaking tasks into pieces helps. If I think of my goal as “Save the World”, another day of work won’t feel very important. But a goal of “Finish reading another research paper” is something I can make real progress on in a day!

## All models are wrong, but some are useful

This framework isn’t perfect. Neither is this writeup. (I’m hyper-aware.) But this idea has been in my head, in my drafts folder, and unfinished for months. Rather than give in to the sense that I “should” keep working on it, I’m going to try following my own advice. I’m remembering that:

• I’ve clarified my thinking a ton by writing everything down.
• The consequences of a sloppy post are minimal in the big scheme of things.
• This isn’t supposed to be my final conclusion – it’s one step on the path.

Even if it’s not perfect, perhaps the current iteration of this framework can help you understand me, yourself, or perfectionists in your life.

I used to have this “DONE IS BETTER THAN PERFECT” poster draped over a chair in my office. I never got around to hanging it up, but honestly? It seems better that way.

The Perfectionist’s Script for Self-Defeat by David Burns (PDF)

When Perfect Isn’t Good Enough

Mastering the Art of Quitting by Peg Streep & Alan Bernstein

Better By Mistake by Alina Tugend

The Procrastination Equation by Piers Steel

## Which Cognitive Bias is Making NFL Coaches Predictable?

In football, it pays to be unpredictable (although the “wrong way touchdown” might be taking it a bit far). If the other team picks up on an unintended pattern in your play calling, they can take advantage of it and adjust their strategy to counter yours. Coaches and their staff of coordinators are paid millions of dollars to call plays that maximize their team’s talent and exploit their opponent’s weaknesses.

That’s why it surprised Brian Burke, formerly of AdvancedNFLAnalytics.com (and now hired by ESPN), to see a peculiar trend: football teams seem to rush a remarkably high percentage of the time on 2nd and 10 compared to 2nd and 9 or 11.

What’s causing that?

His insight was that 2nd and 10 disproportionately followed an incomplete pass. This generated two hypotheses:

1. Coaches (like all humans) are bad at generating random sequences, and have a tendency to alternate too much when they’re trying to be genuinely random. Since 2nd and 10 is most likely the result of a 1st down pass, alternating would produce a high percent of 2nd down rushes.
2. Coaches are suffering from the ‘small sample fallacy’ and ‘recency bias’, overreacting to the result of the previous play. Since 2nd and 10 not only likely follows a pass, but a failed pass, coaches have an impulse to try the alternative without realizing they’re being predictable.
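Hypothesis 1 has a testable signature: a genuinely random 50/50 play-caller would switch play type only about half the time, so switch rates well above 50% suggest over-alternation. A quick simulation of a truly random caller (the 50/50 rush/pass rate is an assumption for illustration):

```python
import random

random.seed(0)  # reproducible
calls = [random.choice("RP") for _ in range(100_000)]  # R = rush, P = pass

switches = sum(prev != cur for prev, cur in zip(calls, calls[1:]))
switch_rate = switches / (len(calls) - 1)
print(f"switch rate of a random caller: {switch_rate:.3f}")  # ~0.5
```

Humans asked to "be random" typically switch far more often than this baseline, which is exactly the kind of pattern a defense could exploit.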

These explanations made sense to me, and I wrote about the phenomenon a few years ago. But now that I’ve been learning data science, I can dive deeper into the analysis and add a hypothesis of my own.

The following work is based on the play-by-play data for every NFL game from 2002 through 2012, which Brian kindly posted. I spent some time processing it to create variables like Previous Season Rushing %, Yards per Pass, Yards Allowed per Pass by Defense, and QB Completion percent. The Python notebooks are available on my GitHub, although the data files were too large to host easily.

## Irrationality? Or Confounding Variables?

Since this is an observational study rather than a randomized control trial, there are bound to be confounding variables. In our case, we’re comparing coaches’ play calling on 2nd down after getting no yards on their team’s 1st down rush or pass. But those scenarios don’t come from the same distribution of game situations.

A number of variables could be in play, some exaggerating the trend and others minimizing it. For example, teams that passed for no gain on 1st down (resulting in 2nd and 10) have a disproportionate number of inaccurate quarterbacks (the left graph). These teams with inaccurate quarterbacks are more likely to call rushing plays on 2nd down (the right graph). Combine those factors, and we don’t know whether any difference in play calling is caused by the 1st down play type or the quality of quarterback.

The classic technique is to train a regression model to predict the next play call, and judge a variable’s impact by the coefficient the model gives that variable. Unfortunately, models that give interpretable coefficients tend to treat each variable as either positively or negatively correlated with the target – so time remaining can’t be positively correlated with a coach calling running plays when the team is losing and negatively correlated when the team is winning. Since the relationships in the data are more complicated, we need a model that can handle them.

I saw my chance to try a technique I learned at the Boston Data Festival last year: Inverse Probability of Treatment Weighting.

In essence, the goal is to create artificial balance between your ‘treatment’ and ‘control’ groups — in our case, 2nd and 10 situations following 1st down passes vs. following 1st down rushes. We want to take plays with under-represented characteristics and ‘inflate’ them by pretending they happened more often, and – ahem – ‘deflate’ the plays with over-represented features.

To get a single metric of how over- or under-represented a play is, we train a model (one that handles non-linear relationships better) that takes each 2nd down play’s confounding variables as input – score, field position, QB quality, etc. – and tries to predict whether the 1st down play was a rush or pass. If, based on the confounding variables, the model predicts the play was 90% likely to be after a 1st down pass – and it was – we decide the play probably has over-represented features and we give it less weight in our analysis. However, if the play actually followed a 1st down rush, it must have under-represented features for the model to get it so wrong. Accordingly, we give it more weight.

After assigning each play a new weight to compensate for its confounding features (using K-fold cross-validation to avoid training the model on the very plays it’s trying to score), the two groups *should* be balanced. It’s as though we were running a scientific study, noticed that our control group had half as many men as the treatment group, and went out to recruit more men. However, since that isn’t an option, we just decided to count the men twice.
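A minimal sketch of the weighting step, using a single confounder and a stratified propensity estimate in place of the non-linear model (the real analysis used many variables and out-of-fold predictions; all the toy plays below are invented):

```python
from collections import defaultdict

# Toy 2nd-and-10 plays: (team_has_big_lead, followed_1st_down_pass).
# Leading teams are assumed, for illustration, to have rushed more on 1st down.
plays = [
    (True, False), (True, False), (True, False), (True, True),
    (False, True), (False, True), (False, True), (False, False),
]

# Propensity: P(1st down was a pass) within each confounder stratum.
counts = defaultdict(lambda: [0, 0])  # stratum -> [passes, total]
for leading, after_pass in plays:
    counts[leading][0] += after_pass
    counts[leading][1] += 1

def iptw_weight(leading, after_pass):
    """Inverse probability of treatment: over-represented combinations
    of features and treatment get deflated, under-represented ones inflated."""
    p_pass = counts[leading][0] / counts[leading][1]
    return 1 / p_pass if after_pass else 1 / (1 - p_pass)

for play in plays[:4]:
    print(play, "weight =", round(iptw_weight(*play), 2))
```

Within the “big lead” stratum, the lone pass-following play gets weight 4 and each of the three rush-following plays gets weight 4/3, so the weighted totals on each side come out equal, which is the whole point of the balancing.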

## Testing our Balance

Before processing, teams that rushed on 1st down for no gain were disproportionately likely to be teams with the lead. After the re-weighting process, the distributions are far more similar:

Much better! They’re not all this dramatic, but lead was the strongest confounding factor and the model paid extra attention to adjust for it.

It’s great that the distributions look more similar, but that’s qualitative. For a quantitative diagnostic, we can take the standardized difference in means, recommended as a best practice in a 2015 paper by Peter C. Austin and Elizabeth A. Stuart titled “Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies”.

For each potential confounding variable, we take the difference in means between plays following 1st down passes and 1st down rushes and scale it by their combined variance. A high standardized difference of means indicates that our two groups are dissimilar and in need of balancing. The standardized differences had a max of around 47% and a median of 7.5% before applying IPT-weighting, which reduced them to 9% and 3.1%, respectively.
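The diagnostic itself is only a few lines. Here it is applied to one hypothetical confounder (the lead values are invented; only the formula follows Austin and Stuart):

```python
import math

def standardized_difference(treated, control):
    """Absolute difference in means, scaled by the pooled standard deviation.
    Reported as a fraction (0.10 = 10%), per Austin & Stuart (2015)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def variance(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    pooled_sd = math.sqrt((variance(treated) + variance(control)) / 2)
    return abs(mean(treated) - mean(control)) / pooled_sd

# Hypothetical point differentials at the time of the play:
lead_after_pass = [-7, -3, 0, 3, -10, -4]   # plays following 1st down passes
lead_after_rush = [0, 3, 7, 10, 4, -3]      # plays following 1st down rushes
print(f"{standardized_difference(lead_after_pass, lead_after_rush):.0%}")
```

A value this large screams imbalance; after successful re-weighting you would recompute it with the IPTW weights and expect it to fall toward zero.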

So, now that we’ve done what we can to balance the groups, do coaches still call rushing plays on 2nd and 10 more often after 1st down passes than after rushes? In a word, yes.

In fact, the pattern is even stronger after controlling for game situation. It turns out that the biggest factor was the score (especially when time was running out). A losing team needs to be passing the ball more often to try to come back, so their 2nd and 10 situations are more likely to follow passes on 1st down. If those teams are *still* calling rushing plays often, it’s even more evidence that something strange is going on.

Ok, so controlling for game situation doesn’t explain away the spike in rushing percent at 2nd and 10. Is it due to coaches’ impulse to alternate their play calling?

Maybe, but that can’t be the whole story. If it were, I would expect to see the trend consistent across different 2nd down scenarios. But when we look at all 2nd-down distances, not just 2nd and 10, we see something else:

If their teams don’t get very far on 1st down, coaches are inclined to change their play call on 2nd down. But as a team gains more yards on 1st down, coaches are less and less inclined to switch. If the team got six yards, coaches rush about 57% of the time on 2nd down regardless of whether they ran or passed last play. And it actually reverses if you go beyond that – if the team gained more than six yards on 1st down, coaches have a tendency to repeat whatever just succeeded.

It sure looks like coaches are reacting to the previous play in a predictable Win-Stay Lose-Shift pattern.

Following a hunch, I did one more comparison: passes completed for no gain vs. incomplete passes. If incomplete passes feel more like a failure, the recency bias would influence coaches to call more rushing plays after an incompletion than after a pass that was caught for no gain.

Before the re-weighting process, there’s almost no difference in play calling between the two groups – 43.3% vs. 43.6% (p = .88). However, after adjusting for the game situation – especially quarterback accuracy – the trend reemerges: in similar game scenarios, teams rush 44.4% of the time after an incompletion and only 41.5% after passes completed for no gain. That might sound small, but with 20,000 data points it’s a pretty big difference (p < 0.00005).
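For differences between proportions like these, significance can be checked with a standard two-proportion z-test. The counts below are illustrative round numbers (about 10,000 plays per group), not the exact tallies from the dataset:

```python
import math

def two_proportion_z(hits1, n1, hits2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled estimate for the standard error."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# ~44.4% vs. ~41.5% rushing, with ~10,000 plays in each group:
z = two_proportion_z(4440, 10000, 4150, 10000)
print(f"z = {z:.2f}")  # a z above ~4 corresponds to p < 0.0001
```

A roughly three-point gap that looks negligible in a small sample becomes overwhelming evidence at this scale, which is why the p-value lands so far below 0.00005.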

All signs point to the recency bias being the primary culprit.

## Reasons to Doubt

1) There are a lot of variables I didn’t control for, including fatigue, player substitutions, temperature, and whether the game clock was stopped in between plays. Any or all of these could impact the play calling.

2) Brian Burke’s (and my) initial premise was that if teams are irrationally rushing more often after incomplete passes, defenses should be able to prepare for this and exploit the pattern. Conversely, going against the trend should be more likely to catch the defense off-guard.

I really expected to find plays gaining more yards if they bucked the trends, but it’s not as clear as I would like.  I got excited when I discovered that rushing plays on 2nd and 10 did worse if the previous play was a pass – when defenses should expect it more. However, when I looked at other distances, there just wasn’t a strong connection between predictability and yards gained.

One possibility is that I needed to control for more variables. But another possibility is that while defenses *should* be able to exploit a coach’s predictability, they can’t or don’t. To give Brian the last words:

But regardless of the reasons, coaches are predictable, at least to some degree. Fortunately for offensive coordinators, it seems that most defensive coordinators are not aware of this tendency. If they were, you’d think they would tip off their own offensive counterparts, and we’d see this effect disappear.

## Why Decision Theory Tells You to Eat ALL the Cupcakes

Imagine that you have a big task coming up that requires an unknown amount of willpower – you might have enough willpower to finish, you might not. You’re gearing up to start when suddenly you see a delicious-looking cupcake on the table. Do you indulge in eating it? According to psychology research and decision-theory models, the answer isn’t simple.

If you resist the temptation to eat the cupcake, current research indicates that you’ve depleted your stores of willpower (psychologists call it ego depletion), which makes you less likely to have the willpower to finish your big task. So maybe you should save your willpower for the big task ahead and eat the cupcake!

…But if you’re convinced already, hold on a second. How easily you give in to temptation gives evidence about your underlying strength of will. After all, someone with weak willpower will find the reasons to indulge more persuasive. If you end up succumbing to the temptation, it’s evidence that you’re a person with weaker willpower, and are thus less likely to finish your big task.

How can eating the cupcake cause you to be more likely to succeed while also giving evidence that you’re more likely to fail?

### Conflicting Decision Theory Models

The strangeness lies in the difference between two conflicting models of how to make decisions. Luke Muehlhauser describes them well in his Decision Theory FAQ:

This is not a “merely verbal” dispute (Chalmers 2011). Decision theorists have offered different algorithms for making a choice, and they have different outcomes. Translated into English, the [second] algorithm (evidential decision theory or EDT) says “Take actions such that you would be glad to receive the news that you had taken them.” The [first] algorithm (causal decision theory or CDT) says “Take actions which you expect to have a positive effect on the world.”

The crux of the matter is how to handle the fact that we don’t know how much underlying willpower we started with.

Causal Decision Theory asks, “How can you cause yourself to have the most willpower?”

It focuses on the fact that, in any state, spending willpower resisting the cupcake causes ego depletion. Because of that, it says our underlying amount of willpower is irrelevant to the decision. The recommendation stays the same regardless: eat the cupcake.

Evidential Decision Theory asks, “What will give evidence that you’re likely to have a lot of willpower?”

We don’t know whether we’re starting with strong or weak will, but our actions can reveal that one state or another is more likely. It’s not that we can change the past – Evidential Decision Theory doesn’t look for that causal link – but our choice indicates which possible version of the past we came from.

Yes, seeing someone undergo ego depletion would be evidence that they lost a bit of willpower.  But watching them resist the cupcake would probably be much stronger evidence that they have plenty to spare.  So you would rather “receive news” that you had resisted the cupcake.
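The disagreement can be made concrete with a toy calculation. Every number below is invented purely to show how the two algorithms, fed the same setup, reach opposite recommendations:

```python
# Invented inputs: a 50/50 prior over willpower types, finish probabilities
# that reflect ego depletion, and how often each type resists temptation.
P_STRONG = 0.5

P_FINISH = {  # (strong_will, resisted_cupcake) -> P(finish the big task)
    (True, True): 0.7, (True, False): 0.9,    # resisting depletes ego...
    (False, True): 0.2, (False, False): 0.4,  # ...for either type
}

P_RESIST = {True: 0.8, False: 0.2}  # strong wills resist far more often

def cdt_value(resist):
    """CDT: your type is fixed; the action only *causes* ego depletion."""
    return (P_STRONG * P_FINISH[(True, resist)]
            + (1 - P_STRONG) * P_FINISH[(False, resist)])

def edt_value(resist):
    """EDT: condition on the action as *news* about which type you are."""
    like_strong = P_RESIST[True] if resist else 1 - P_RESIST[True]
    like_weak = P_RESIST[False] if resist else 1 - P_RESIST[False]
    post_strong = (P_STRONG * like_strong) / (
        P_STRONG * like_strong + (1 - P_STRONG) * like_weak)
    return (post_strong * P_FINISH[(True, resist)]
            + (1 - post_strong) * P_FINISH[(False, resist)])

print("CDT:", {"resist": cdt_value(True), "eat": cdt_value(False)})
print("EDT:", {"resist": edt_value(True), "eat": edt_value(False)})
```

With these numbers, CDT prefers eating (0.65 vs. 0.45) while EDT prefers resisting (0.60 vs. 0.50): on the evidential view, the causal gain from saving willpower is outweighed by what resisting says about your type.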

### A Third Option

Each of these models has strengths and weaknesses, and a number of thought experiments – especially the famous Newcomb’s Paradox – have sparked ongoing discussions and disagreements about what decision theory model is best.

One attempt to improve on standard models is Timeless Decision Theory, a method devised by Eliezer Yudkowsky of the Machine Intelligence Research Institute.  Alex Altair recently wrote up an overview, stating in the paper’s abstract:

When formulated using Bayesian networks, two standard decision algorithms (Evidential Decision Theory and Causal Decision Theory) can be shown to fail systematically when faced with aspects of the prisoner’s dilemma and so-called “Newcomblike” problems. We describe a new form of decision algorithm, called Timeless Decision Theory, which consistently wins on these problems.

It sounds promising, and I can’t wait to read it.

### But Back to the Cupcakes

For our particular cupcake dilemma, there’s a way out:

Precommit. You need to promise – right now! – to always eat the cupcake when it’s presented to you. That way you don’t spend any willpower on resisting temptation, but your indulgence doesn’t give any evidence of a weak underlying will.

And that, ladies and gentlemen, is my new favorite excuse for why I ate all the cupcakes.

## Will moving to California make you happier?

I pass dozens of brilliantly-colored flowers like this on my daily walk to work. (Photo credit: B Mully, Flickr)

I might have to disagree with a Nobel Laureate on this one.

According to Daniel Kahneman, Nobel prize-winning psychologist and author of the excellent Thinking Fast and Slow, the answer is “No.” A recent post on Big Think describes how Kahneman asked people to predict who’s happier, on average, Californians or Midwesterners. Most people (from both regions!) say, “Californians.” That’s because, Kahneman explains, the act of comparison highlights what’s saliently different between the two regions: their climate. And on that dimension, California’s a pretty clear winner.

And indeed, Californians report loving their climate and Midwesterners loathing theirs. Yet despite that, the overall life satisfaction in the two regions turns out to be nearly identical, according to a 1998 survey by Kahneman. Climate just isn’t that important to happiness, it turns out. The fact that it greatly influences people’s predictions of relative happiness in California vs. the Midwest stems from something called the “Focusing illusion,” Kahneman explains — a bias he sums up with the pithy, “Nothing in life is as important as you think it is when you are thinking about it.”

So far, I have no beef with this interpretation. What I *do* object to is the conclusion, which Kahneman implies and Big Think makes explicit, that “moving to California won’t make you happy.”

I moved from New York, NY to Berkeley, CA, earlier this year, and — having read Kahneman — I didn’t expect the climate to make a noticeable difference in my mood. And yet, every day, when I would leave my house, I found my spirits buoyed by the balmy weather and the clear blue sky. I noticed, multiple times daily, how beautiful the vegetation was and how fresh, fragrant, and — well — un-Manhattanlike the air smelled. It made a noticeable difference in my mood nearly every day, and continues to, six months after I moved.

I was a little surprised that my result was so different from Kahneman’s. And then I realized: Most of those Californians in his study have always been Californians. They grew up there; they didn’t move from the Midwest (or Manhattan) to California. So it’s understandable that their climate doesn’t make a big impact on their happiness, because they have no standard of comparison. They’re not constantly thinking to themselves — as I have been — “Man, it’s so *nice* not to have to shiver inside a bulky winter coat!” or “Man, it’s such a relief not to smell garbage bags sitting out on the sidewalk,” or “Wow, it’s quite pleasant not to be sticky with sweat.”

I’m only one data point, of course, and it’s possible that if you studied people who moved from the Midwest to CA, you’d find that their change in happiness was in fact no different from that of people who moved from CA to the Midwest. But at the least, I think it’s important to note that that’s not the study Kahneman did. And, as a general rule in reading (or conducting!) happiness research, it’s important to remember that the happiness you get from a state depends on your previous states.

## A rational view of tradition

In my latest video blog I answer a listener’s question about why rationalists are more likely to abandon social norms like marriage, monogamy, standard gender roles, having children, and so on. And then I weigh in on whether that’s a rational attitude to take:

## RS episode #53: Parapsychology

In Episode 53 of the Rationally Speaking Podcast, Massimo and I take on parapsychology, the study of phenomena such as extrasensory perception, precognition, and remote viewing. We discuss the types of studies parapsychologists conduct, what evidence they’ve found, and how we should interpret that evidence. The field is mostly not taken seriously by other scientists, which parapsychologists argue is unfair, given that their field shows some consistent and significant results. Do they have a point? Massimo and I discuss the evidence and talk about what the results from parapsychology tell us about the practice of science in general.

http://www.rationallyspeakingpodcast.org/show/rs53-parapsychology.html

## You’re such an essentialist!

My latest video blog is about essentialism, and why it’s damaging to your rationality — and your happiness.

## RS #48: Philosophical Counseling

Can philosophy be a form of therapy? On the latest episode of Rationally Speaking, we interview Lou Marinoff, a philosopher who founded the field of “philosophical counseling,” in which people pay philosophers to help them deal with their own personal problems using philosophy. For example, one of Lou’s clients wanted advice on whether to quit her finance job to pursue a personal goal; another sought help deciding how to balance his son’s desire to go to Disneyland with his own fear of spoiling his children.

As you can hear in the interview, I’m interested but I’ve got major reservations. I certainly think that philosophy can improve how you live your life — I’ve got some great examples of that from personal experience. But I’m skeptical of Lou’s project for two related reasons: first, because I think most problems in people’s lives are best addressed by a combination of psychological science and common sense. They require a sophisticated understanding of how our decision-making algorithms go wrong — for example, why we make decisions that we know are bad for us, how we end up with distorted views of our situations and of our own strengths and weaknesses, and so on. Those are empirical questions, and philosophy’s not an empirical field, so relying on philosophy to solve people’s problems is going to miss a large part of the picture.

The other problem is that it wasn’t at all clear to me how philosophical counselors choose which philosophy to cite. For any viewpoint in the literature, you can pretty reliably find an opposing one. In the case of the father afraid of spoiling his kid, Lou cited Aristotle to argue for an “all things in moderation” policy. But, I pointed out, he could just as easily have cited Stoic philosophers arguing that happiness lies in relinquishing desires.  So if you can pick and choose any philosophical advice you want, then aren’t you really just giving your client your own opinion about his problem, and just couching your advice in the words of a prestigious philosopher?

Hear more at Rationally Speaking Episode 48, “Philosophical Counseling.”

## Why this Meme Exploded

[cross-posted on Friendly Atheist]

Somehow, it went viral. In just 24 hours, the Facebook page of the Secular Student Alliance (my organization) exploded from 6,500 supporters’ “likes” to 18,000. I found myself thinking, “How the hell did that happen?” And then thinking, “Hmm… how can we do it again?”

The whole thing started with Kenny Flagg, one of our group leaders with the Freethinkers of UND. After noticing that the SSA’s Facebook presence was much smaller than Campus Crusade for Christ’s, he wanted to make a difference. He “grabbed both profile pictures for the groups, added the stats from each page, and threw in a quick meme for good measure.” Then he posted it on Reddit. That was it. Take a look – would you expect it to inspire a frenzy of activity?

Yes, this is the image that launched a thousand clicks. Well, several thousand, actually.

I had to figure out why a simple picture like this inspired such a big reaction. The more I thought about it, the more psychology and rhetorical communication techniques I saw present. Kenny:

1. Demonstrated insider status
2. Invoked tribal/patriotic feelings, and
3. Gave people direction.

Well, look at that. In classic style, he hit the three classical appeals of rhetoric: Ethos, Pathos, and Logos.

## Kenny’s Insider Status (Ethos)

Kenny was a perfect person for the task. If my coworkers or I had been the ones to post, we would seem self-serving. Kenny, not being an SSA employee, comes across as a more objective voice. Do you trust the used car salesman or the blue book to tell you a car’s value? We tend to trust people more if they share our interest – and we trust them less if we suspect they’re looking out for themselves.

A great way to gain people’s trust is by proving that you’re a member of their community. Sharing group identity acts as a proxy for sharing values. The “Challenge accepted” meme accomplished that beautifully. It’s like using slang – it reinforces your status as an insider. Redditors heard the message: “I’m one of you.” He put that to good use.

## Our Tribal Emotions (Pathos)

After establishing his credibility as an insider, Kenny appealed to an incredibly powerful emotion to get them to act: group loyalty. When groups of people get compared to their rivals, it creates an us-versus-them mentality. The competition angle rallied atheists on Reddit into a stronger, more unified group.

And the more atheist redditors rallied together, the stronger the social proof dynamic became. When we’re in a group, we tend to watch other people for cues about how to behave. As redditors saw other people commenting, upvoting the post, and liking the SSA’s page, it influenced their behavior. People got the impression: “This is what atheists on Reddit are doing.” As part of that group, they felt moved to behave the same way.

Kenny’s post inspired group pride, anger at cultural opponents, and the desire to fit in – emotions that motivate us to act. But that motivation needed direction.

## Giving a Direction (Logos)

Have you ever felt that you wanted to make a difference, but just didn’t know how to do it? Without direction, all that energy just sputters out. Telling people to “eat healthier” is overwhelming and vague, but saying “switch to 1% milk” is specific and helpful.

Kenny gave everyone a simple, concrete task: go click “like” on the Secular Student Alliance’s page. He had everyone share his big vision: to get the Secular Student Alliance as many “likes” as the Campus Crusade for Christ page. He even provided a link to the SSA’s Facebook page. The direction was clear.

It all fit together.

## Can we do this again?

We never know for sure whether a meme will explode.

But we’ll be more likely to go viral if we pay attention to what works. If you’re interested, I recommend Chip and Dan Heath’s books Made to Stick and Switch. Kenny managed to use psychology techniques without meaning to, but we can be more deliberate with our efforts. (Be careful fostering us-versus-them feelings. Competition is all well and good, but actual hostility is dangerous.)

It might seem like a lot of it boils down to luck. But as Richard Wiseman found, capitalizing on “luck” is really a skill. The Secular Student Alliance prepared by generating student leaders who were enthusiastic to help us out. When we spotted the opportunity we posted like madmen, and even hosted an “Ask Us Anything” to interact with the community. And yes, Kenny did a fantastic job.

For such a quick image, it had a lot going for it. It’s not exactly Cicero orating in the Roman Senate, but it was damn good rhetoric in its own way. Forget a thousand words, that picture was worth 12,000 Facebook fans.

## Overcoming The Curse of Knowledge

What is the Curse of Knowledge, and how does it apply to science education, persuasion, and communication? No, it’s not a reference to the Garden of Eden story. I’m referring to a particular psychological phenomenon that can make our messages backfire if we’re not careful.

Communication isn’t a solo activity; it involves both you and the audience. Writing a diary entry is a great way to sort out thoughts, but if you want to be informative and persuasive to others, you need to figure out what they’ll understand and be persuaded by. A common habit is to use ourselves as a mental model – assuming that everyone else will laugh at what we find funny, agree with what we find convincing, and interpret words the way we use them. The model works to an extent – especially with people similar to us – but other times our efforts fall flat. You can present the best argument you’ve ever heard, only to have it fall on dumb – sorry, deaf – ears.

That’s not necessarily your fault – maybe they’re just dense! Maybe the argument is brilliant! But if we want to communicate successfully, pointing fingers and assigning blame is irrelevant. What matters is getting our point across, and we can’t do it if we’re stuck in our head, unable to see things from our audience’s perspective. We need to figure out what words will work.

Unfortunately, that’s where the Curse of Knowledge comes in. In 1990, Elizabeth Newton did a fascinating psychology experiment: She paired participants into teams of two: one tapper and one listener. The tappers picked one of 25 well-known songs and would tap out the rhythm on a table. Their partner – the designated listener – was asked to guess the song. How do you think they did?

Not well. Of the 120 songs tapped out on the table, the listeners only guessed 3 of them correctly – a measly 2.5 percent. But get this: before the listeners gave their answer, the tappers were asked to predict how likely their partner was to get it right. Their guess? Tappers thought their partners would get the song 50 percent of the time. You know, only overconfident by a factor of 20. What made the tappers so far off?

They lost perspective because they were “cursed” with the additional knowledge of the song title. Chip and Dan Heath use the story in their book Made to Stick to introduce the term:

“The problem is that tappers have been given knowledge (the song title) that makes it impossible for them to imagine what it’s like to lack that knowledge. When they’re tapping, they can’t imagine what it’s like for the listeners to hear isolated taps rather than a song. This is the Curse of Knowledge. Once we know something, we find it hard to imagine what it was like not to know it. Our knowledge has ‘cursed’ us. And it becomes difficult for us to share our knowledge with others, because we can’t readily re-create our listeners’ state of mind.”

So it goes with communicating complex information. Because we have all the background knowledge and understanding, we’re overconfident that what we’re saying is clear to everyone else. WE know what we mean! Why don’t they get it? It’s tough to remember that other people won’t make the same inferences, have the same word-meaning connections, or share our associations.

It’s particularly important in science education. The more time a person spends in a field, the more the field’s obscure language becomes second nature. Without special attention, audiences might not understand the words being used – or worse yet, they might get the wrong impression.

Over at the American Geophysical Union blog, Callan Bentley gives a fantastic list of terms that have different meanings for scientists and the public.

What great examples! Even though the scientific terms are technically correct in context, they’re obviously the wrong ones to use when talking to the public about climate change. An inattentive scientist could know all the material but leave the audience walking away with the wrong message.

We need to spend the effort to phrase ideas in a way the audience will understand. Is that the same as “dumbing down” a message? After all, complicated ideas require complicated words and nuanced answers, right? Well, no. A real expert on a topic can give a simple distillation of material, identifying the core of the issue. Bentley did an outstanding job rephrasing technical, scientific terms in a way that conveys the intended message to the public.

That’s not dumbing things down, it’s showing a mastery of the concepts. And he was able to do it by overcoming the “curse of knowledge,” seeing the issue from other people’s perspective. Kudos to him – it’s an essential part of science education, and something I really admire.