A Pretty-Good Mathematical Model of Perfectionism

I struggle with perfectionism. Well, not so much “struggle with” — I’m f*cking great at it. It comes naturally.

There are some upsides, but perfectionism is also associated with anxiety, depression, procrastination, and damaged relationships. Perhaps you, like me, have spent far too much time and emotional energy making sure that an email had the right word choice, had no typos, didn’t reuse a phrase in successive sentences/paragraphs, and closed with the ‘correct’ sign-off. (‘Best,’ is almost always optimal, by the way).

“If I couldn’t do something that rated 10 out of 10 — or at least close to that — I didn’t want to do it at all. Being a perfectionist was an ongoing source of suffering and unhappiness for me … Unfortunately, many of us have been conditioned to hold ourselves to impossible standards. This is a stressful mind state to live in, that’s for sure.” ~ Tony Bernard J.D.

The topic of perfectionism confused me for years. Of course you want things to be perfect; why would you ever actively want something to be worse? However, there’s way more to it than that: It’s a complex interplay between effort, time, motivation, and expectations.

Far too many self-help recommendations essentially said “Be ok with mediocrity!” which… did not speak to me, to say the least.

To better understand the concept, I went through a number of books and papers before building a quasi-mathematical model. You know, like ya’do.

I’ve come to see perfectionism as a mindset with a particular calibration between the quality of your work and your emotional reaction — with decreased sensitivity to marginal differences in lower-quality work and increasing sensitivity as the quality goes up.

[Image: graphs]

  • In a “Balanced” mindset, you become happier in linear proportion to how much better your work is going. (y = x)
  • In a “Satisficing” mindset — taking a pass/fail test, for example — you care about whether something is “good enough”. Most of your emotional variance comes as you approach and meet that threshold. (y = e^x / (1 + e^x))
  • In a Perfectionist mindset, the relationship between quality and emotion is polynomial. You feel almost equally bad about scoring a 40% on a test vs. a 65%, but the difference between a 90% and 93% looms large. (y = x^7)
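The three curves are easy to play with in code. A minimal sketch (the quality scale normalized to [0, 1], and the satisficing threshold and steepness, are my assumptions; the post only specifies the general shapes):

```python
import math

def balanced(quality):
    """Emotional response rises linearly with quality: y = x."""
    return quality

def satisficing(quality, threshold=0.6, steepness=12.0):
    """Logistic curve: most emotional variance near the pass/fail threshold."""
    return 1 / (1 + math.exp(-steepness * (quality - threshold)))

def perfectionist(quality, power=7):
    """Polynomial curve: flat at low quality, steep near the top: y = x^7."""
    return quality ** power

# Scoring 40% vs. 65% barely registers for the perfectionist...
print(perfectionist(0.65) - perfectionist(0.40))  # ~0.047
# ...but 90% vs. 93% looms larger than that whole 25-point gap.
print(perfectionist(0.93) - perfectionist(0.90))  # ~0.123
```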

Looking at the model, I realized it could explain a number of experiences I’d had.


Why even small tasks seem daunting to a perfectionist

A common experience with a perfectionist mindset is having trouble ‘letting go’ of a project — we want to keep tinkering with it, improving it, and never feel quite comfortable moving on.  (I don’t want to say how long this draft sat around.)

This makes sense given the model:

[Image: HappyEnough]

When I think about clicking ‘send’ or ‘post’ before I’ve checked for typos, before I’ve reread everything, before considering where it might be wrong or unclear… it just feels, well, WRONG. I’m not yet happy with it and have trouble declaring it done.

Apart from requiring more time and effort, this can make even seemingly trivial tasks feel daunting. Internally, if you know that a short email will take an hour and a half, it’s going to loom large, even if you have trouble explaining quite why such a small thing is making you feel overwhelmed.


What’s helped me: A likely culprit is overestimating the consequences of mistakes. One solution is to be concrete and write down what you expect to happen if it turns out you have a typo, miss a shot, or bomb a test. Sometimes all it takes to readjust is examining those expectations consciously. Other times you’ll need to experience the ‘failure’, at which point you can compare it to your stated expectations.


Why perfectionists give up on hobbies and tasks easily

Another way to look at this is: if you don’t expect to reach high standards, a project just doesn’t seem worth doing.

[Image: AdequateResults]

The result is a kind of min-max approach to life: if you can’t excel, don’t bother spending time on it.

That’s not necessarily a bad thing!

However, we don’t always have control. In my nonprofit communications career, I was sometimes assigned to write press releases on topics that *might* get attention, but which didn’t seem newsworthy to me. It may still have been worth a few hours of my time in case it grabbed a reporter’s eye, and it was important to keep my job. But I had so. much. trouble. getting myself to do the work.

Even in the personal realm, picking up a new hobby becomes difficult. If it doesn’t seem like you’re going to be amazing at it, the hobby as a whole loses its luster.


What’s helped me: A big problem for me has been overlooking the benefits gained from so-called “failure”. Once I start to factor in e.g. how much I expect to learn (so that I can do better in the future) I end up feeling much better about giving things a shot.


Why procrastination (and anxiety) are common

At a granular scale, the problem becomes worse. Rather than “How good do I expect to feel at the end of this?” our emotional reaction is probably trained by the in-the-moment “How much happier do I expect to feel as a result of one more bit of work?”

In other words, we can view the derivative/slope of these graphs as motivation:

[Image: MotivationCurves]

With a perfectionist mindset, the bigger and further away a goal is, the more difficult it will be to feel motivated in the moment.  For much of the time, we’re trying to push ourselves to work without getting any internal positive reinforcement.
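Treating the slope as in-the-moment motivation makes the problem easy to quantify. A quick numerical check (the x^7 curve comes from the model above; the step size is arbitrary):

```python
def perfectionist(quality, power=7):
    """The perfectionist quality-to-emotion curve from the model: y = x^7."""
    return quality ** power

def motivation(f, quality, eps=1e-6):
    """Marginal motivation: numerical slope of a quality-to-emotion curve."""
    return (f(quality + eps) - f(quality - eps)) / (2 * eps)

# Early in a big project, the perfectionist curve is nearly flat, so one
# more bit of work produces almost no internal positive reinforcement.
early = motivation(perfectionist, 0.2)    # analytically 7 * 0.2^6 ≈ 0.00045
# Near the goal, the slope (and the stakes) explode.
late = motivation(perfectionist, 0.95)    # analytically 7 * 0.95^6 ≈ 5.15
print(late / early)  # four orders of magnitude apart
```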

This is a particular issue in the Effective Altruism movement, where the goal is to *checks notes* Save the World (or, as they put it: “Figure out how to do the most good, and then do it.”)

It’s true that as a perfectionist nears their goal, they’re extremely motivated! But that also means that the stakes are very high for every decision and every action.  …Which is a recipe for anxiety. Terrific.


What’s helped me: To the extent that I can, I find that breaking tasks into pieces helps. If I think of my goal as “Save the World”, another day of work won’t feel very important. But a goal of “Finish reading another research paper” is something I can make real progress on in a day!


All models are wrong, but some are useful

This framework isn’t perfect. Neither is this writeup. (I’m hyper-aware.) But this idea has been in my head, in my drafts folder, and unfinished for months. Rather than give in to the sense that I “should” keep working on it, I’m going to try following my own advice. I’m remembering that:

  • I’ve clarified my thinking a ton by writing everything down.
  • The consequences of a sloppy post are minimal in the big scheme of things.
  • This isn’t supposed to be my final conclusion; it’s one step on the path.

Even if it’s not perfect, perhaps the current iteration of this framework can help you understand me, yourself, or perfectionists in your life.

I used to have this “DONE IS BETTER THAN PERFECT” poster draped over a chair in my office. I never got around to hanging it up, but honestly? It seems better that way.

[Image: Poster]

Articles/books I found helpful:

The Perfectionist’s Script for Self-Defeat by David Burns (PDF)

When Perfect Isn’t Good Enough by Martin M. Antony & Richard P. Swinson

Mastering the Art of Quitting by Peg Streep & Alan Bernstein

Better By Mistake by Alina Tugend

The Procrastination Equation by Piers Steel

Which Cognitive Bias is Making NFL Coaches Predictable?

In football, it pays to be unpredictable (although the “wrong way touchdown” might be taking it a bit far). If the other team picks up on an unintended pattern in your play calling, they can take advantage of it and adjust their strategy to counter yours. Coaches and their staff of coordinators are paid millions of dollars to call plays that maximize their team’s talent and exploit their opponent’s weaknesses.

That’s why it surprised Brian Burke, formerly of AdvancedNFLAnalytics.com (and now hired by ESPN), to see a peculiar trend: football teams seem to rush a remarkably high percentage of the time on 2nd and 10 compared to 2nd and 9 or 2nd and 11.

What’s causing that?

His insight was that 2nd and 10 disproportionately followed an incomplete pass. This generated two hypotheses:

  1. Coaches (like all humans) are bad at generating random sequences, and have a tendency to alternate too much when they’re trying to be genuinely random. Since 2nd and 10 is most likely the result of a 1st down pass, alternating would produce a high percentage of 2nd down rushes.
  2. Coaches are suffering from the ‘small sample fallacy’ and ‘recency bias’, overreacting to the result of the previous play. Since 2nd and 10 not only likely follows a pass, but a failed pass, coaches have an impulse to try the alternative without realizing they’re being predictable.

These explanations made sense to me, and I wrote about the phenomenon a few years ago. But now that I’ve been learning data science, I can dive deeper into the analysis and add a hypothesis of my own.

The following work is based on the play-by-play data for every NFL game from 2002 through 2012, which Brian kindly posted. I spent some time processing it to create variables like Previous Season Rushing %, Yards per Pass, Yards Allowed per Pass by Defense, and QB Completion Percent. The Python notebooks are available on my GitHub, although the data files were too large to host easily.

Irrationality? Or Confounding Variables?

Since this is an observational study rather than a randomized control trial, there are bound to be confounding variables. In our case, we’re comparing coaches’ play calling on 2nd down after getting no yards on their team’s 1st down rush or pass. But those scenarios don’t come from the same distribution of game situations.

A number of variables could be in play, some exaggerating the trend and others minimizing it. For example, teams that passed for no gain on 1st down (resulting in 2nd and 10) have a disproportionate number of inaccurate quarterbacks (the left graph). These teams with inaccurate quarterbacks are more likely to call rushing plays on 2nd down (the right graph). Combine those factors, and we don’t know whether any difference in play calling is caused by the 1st down play type or the quality of quarterback.

[Image: qb_completion_confound]

The classic technique is to train a regression model to predict the next play call and judge a variable’s impact by the coefficient the model assigns it. Unfortunately, models that give interpretable coefficients tend to treat each variable as either positively or negatively correlated with the target – so time remaining can’t be positively correlated with a coach calling running plays when the team is losing and negatively correlated when the team is winning. Since the relationships in the data are more complicated, we need a model that can handle them.

I saw my chance to try a technique I learned at the Boston Data Festival last year: Inverse Probability of Treatment Weighting.

In essence, the goal is to create artificial balance between your ‘treatment’ and ‘control’ groups — in our case, 2nd and 10 situations following 1st down passes vs. following 1st down rushes. We want to take plays with under-represented characteristics and ‘inflate’ them by pretending they happened more often, and – ahem – ‘deflate’ the plays with over-represented features.

To get a single metric of how over- or under-represented a play is, we train a model (one that can handle non-linear relationships better) that takes each 2nd down play’s confounding variables as input – score, field position, QB quality, etc. – and tries to predict whether the 1st down play was a rush or pass. If, based on the confounding variables, the model predicts the play was 90% likely to be after a 1st down pass – and it was – we decide the play probably has over-represented features and we give it less weight in our analysis. However, if the play actually followed a 1st down rush, it must have under-represented features for the model to get it so wrong. Accordingly, we decide to give it more weight.

After assigning each play a new weight to compensate for its confounding features (using k-fold cross-validation so the model never scores the very plays it was trained on), the two groups *should* be balanced. It’s as though we were running a scientific study, noticed that our control group had half as many men as the treatment group, and went out to recruit more men. However, since that isn’t an option, we just decided to count the men twice.
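Here’s roughly what that reweighting step looks like in code. This is a sketch, not the original notebook: the gradient-boosted model, the function name, and the column layout are my assumptions (the post only specifies a non-linear model with out-of-fold predictions):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

def iptw_weights(X, followed_pass, n_folds=5):
    """Inverse Probability of Treatment Weighting.

    X: confounding variables for each 2nd down play (score, field
    position, QB quality, ...). followed_pass: 1 if the 1st down play
    was a pass, 0 if it was a rush.
    """
    # Out-of-fold propensity scores: each play is scored by a model
    # that never saw it during training.
    propensity = cross_val_predict(
        GradientBoostingClassifier(), X, followed_pass,
        cv=n_folds, method="predict_proba")[:, 1]
    # Plays the model predicts confidently have over-represented
    # features and get weights near 1; surprising plays get inflated.
    return np.where(followed_pass == 1,
                    1.0 / propensity,
                    1.0 / (1.0 - propensity))
```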

Testing our Balance

Before processing, teams that rushed on 1st down for no gain were disproportionately likely to be teams with the lead. After the re-weighting process, the distributions are far more similar:

[Image: lead_distribution_split]

Much better! They’re not all this dramatic, but lead was the strongest confounding factor and the model paid extra attention to adjust for it.

It’s great that the distributions look more similar, but that’s qualitative. For a quantitative diagnostic, we can use the standardized difference in means, recommended as a best practice in a 2015 paper by Peter C. Austin and Elizabeth A. Stuart titled “Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies”.

For each potential confounding variable, we take the difference in means between plays following 1st down passes and 1st down rushes and scale it by their combined variance. A high standardized difference in means indicates that our two groups are dissimilar and in need of balancing. The standardized differences had a max of around 47% and a median of 7.5% before applying IPT-weighting, which reduced those figures to 9% and 3.1%, respectively.
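The diagnostic itself is one formula. A sketch of the metric described in the Austin & Stuart paper (the function name and the optional-weights handling are mine):

```python
import numpy as np

def standardized_difference(treated, control, w_treated=None, w_control=None):
    """Standardized difference in means for one confounding variable:
    |mean_t - mean_c| / sqrt((var_t + var_c) / 2), optionally weighted."""
    m_t = np.average(treated, weights=w_treated)
    m_c = np.average(control, weights=w_control)
    v_t = np.average((treated - m_t) ** 2, weights=w_treated)
    v_c = np.average((control - m_c) ** 2, weights=w_control)
    return abs(m_t - m_c) / np.sqrt((v_t + v_c) / 2)
```

A value near 0 means the two groups look alike on that variable; passing the IPTW weights in as `w_treated`/`w_control` is how you check that the re-weighting actually helped.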

[Image: standard_difference_means]

Actually Answering Our Question

So, now that we’ve done what we can to balance the groups, do coaches still call rushing plays on 2nd and 10 more often after 1st down passes than after rushes? In a word, yes.

[Image: playcall_split]

In fact, the pattern is even stronger after controlling for game situation. It turns out that the biggest factor was the score (especially when time was running out.) A losing team needs to be passing the ball more often to try to come back, so their 2nd and 10 situations are more likely to follow passes on 1st down. If those teams are *still* calling rushing plays often, it’s even more evidence that something strange is going on.

Ok, so controlling for game situation doesn’t explain away the spike in rushing percent at 2nd and 10. Is it due to coaches’ impulse to alternate their play calling?

Maybe, but that can’t be the whole story. If it were, I would expect to see the trend consistent across different 2nd down scenarios. But when we look at all 2nd-down distances, not just 2nd and 10, we see something else:

[Image: all_yards_playcalling]

If their teams don’t get very far on 1st down, coaches are inclined to change their play call on 2nd down. But as a team gains more yards on 1st down, coaches are less and less inclined to switch. If the team got six yards, coaches rush about 57% of the time on 2nd down regardless of whether they ran or passed last play. And it actually reverses if you go beyond that – if the team gained more than six yards on 1st down, coaches have a tendency to repeat whatever just succeeded.

It sure looks like coaches are reacting to the previous play in a predictable Win-Stay Lose-Shift pattern.

Following a hunch, I did one more comparison: passes completed for no gain vs. incomplete passes. If incomplete passes feel more like a failure, the recency bias would influence coaches to call more rushing plays after an incompletion than after a pass that was caught for no gain.

Before the re-weighting process, there’s almost no difference in play calling between the two groups – 43.3% vs. 43.6% (p = .88). However, after adjusting for the game situation – especially quarterback accuracy – the trend reemerges: in similar game scenarios, teams rush 44.4% of the time after an incompletion and only 41.5% after passes completed for no gain. That might sound small, but with 20,000 data points it’s a pretty big difference (p < 0.00005).
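For anyone who wants to sanity-check that significance claim, a two-proportion z-test is enough. A sketch (the even split of the ~20,000 plays between the two groups is my assumption; the post doesn’t give exact group sizes):

```python
from math import erf, sqrt

def two_proportion_p(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two proportions,
    using the pooled standard error and the normal CDF."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # two-sided p-value

# 44.4% vs. 41.5% with ~10,000 plays per group:
print(two_proportion_p(0.444, 10_000, 0.415, 10_000))  # ~3e-5
```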

All signs point to the recency bias being the primary culprit.

Reasons to Doubt:

1) There are a lot of variables I didn’t control for, including fatigue, player substitutions, temperature, and whether the game clock was stopped in between plays. Any or all of these could impact the play calling.

2) Brian Burke’s (and my) initial premise was that if teams are irrationally rushing more often after incomplete passes, defenses should be able to prepare for this and exploit the pattern. Conversely, going against the trend should be more likely to catch the defense off-guard.

I really expected to find plays gaining more yards if they bucked the trends, but it’s not as clear as I would like.  I got excited when I discovered that rushing plays on 2nd and 10 did worse if the previous play was a pass – when defenses should expect it more. However, when I looked at other distances, there just wasn’t a strong connection between predictability and yards gained.

One possibility is that I needed to control for more variables. But another possibility is that while defenses *should* be able to exploit a coach’s predictability, they can’t or don’t. To give Brian the last words:

But regardless of the reasons, coaches are predictable, at least to some degree. Fortunately for offensive coordinators, it seems that most defensive coordinators are not aware of this tendency. If they were, you’d think they would tip off their own offensive counterparts, and we’d see this effect disappear.

Why Decision Theory Tells You to Eat ALL the Cupcakes

[Image: cupcake]

Imagine that you have a big task coming up that requires an unknown amount of willpower – you might have enough willpower to finish, you might not. You’re gearing up to start when suddenly you see a delicious-looking cupcake on the table. Do you indulge in eating it? According to psychology research and decision-theory models, the answer isn’t simple.

If you resist the temptation to eat the cupcake, current research indicates that you’ve depleted your stores of willpower (psychologists call it ego depletion), which makes you less likely to have the willpower to finish your big task. So maybe you should save your willpower for the big task ahead and eat the cupcake!

…But if you’re convinced already, hold on a second. How easily you give in to temptation gives evidence about your underlying strength of will. After all, someone with weak willpower will find the reasons to indulge more persuasive. If you end up succumbing to the temptation, it’s evidence that you’re a person with weaker willpower, and are thus less likely to finish your big task.

How can eating the cupcake cause you to be more likely to succeed while also giving evidence that you’re more likely to fail?

Conflicting Decision Theory Models

The strangeness lies in the difference between two conflicting models of how to make decisions. Luke Muehlhauser describes them well in his Decision Theory FAQ:

This is not a “merely verbal” dispute (Chalmers 2011). Decision theorists have offered different algorithms for making a choice, and they have different outcomes. Translated into English, the [second] algorithm (evidential decision theory or EDT) says “Take actions such that you would be glad to receive the news that you had taken them.” The [first] algorithm (causal decision theory or CDT) says “Take actions which you expect to have a positive effect on the world.”

The crux of the matter is how to handle the fact that we don’t know how much underlying willpower we started with.

Causal Decision Theory asks, “How can you cause yourself to have the most willpower?”

It focuses on the fact that, in any state, spending willpower resisting the cupcake causes ego depletion. Because of that, it says our underlying amount of willpower is irrelevant to the decision. The recommendation stays the same regardless: eat the cupcake.

Evidential Decision Theory asks, “What will give evidence that you’re likely to have a lot of willpower?”

We don’t know whether we’re starting with strong or weak will, but our actions can reveal that one state or another is more likely. It’s not that we can change the past – Evidential Decision Theory doesn’t look for that causal link – but our choice indicates which possible version of the past we came from.

Yes, seeing someone undergo ego depletion would be evidence that they lost a bit of willpower.  But watching them resist the cupcake would probably be much stronger evidence that they have plenty to spare.  So you would rather “receive news” that you had resisted the cupcake.
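The conflict is easier to see with toy numbers. Everything below is illustrative; the probabilities are assumptions chosen only to match the qualitative claims above (resisting costs a little willpower, but resisting is strong evidence of a strong will):

```python
P_STRONG = 0.5  # prior probability that you have strong underlying willpower

# P(finish the big task | underlying will, cupcake choice).
# Eating preserves willpower, so it helps either type a little.
P_FINISH = {("strong", "eat"): 0.9, ("strong", "resist"): 0.8,
            ("weak", "eat"): 0.4, ("weak", "resist"): 0.3}

# CDT: your type is fixed; only the causal effect of the choice matters.
cdt = {a: P_STRONG * P_FINISH[("strong", a)]
          + (1 - P_STRONG) * P_FINISH[("weak", a)]
       for a in ("eat", "resist")}
# cdt["eat"] = 0.65 > cdt["resist"] = 0.55, so CDT says eat.

# EDT: the choice is evidence about your type. Suppose strong-willed
# people resist 80% of the time and weak-willed people only 20%.
def p_strong_given(action):
    p_act = {"resist": 0.8, "eat": 0.2}[action]  # P(action | strong)
    q_act = {"resist": 0.2, "eat": 0.8}[action]  # P(action | weak)
    return p_act * P_STRONG / (p_act * P_STRONG + q_act * (1 - P_STRONG))

edt = {a: p_strong_given(a) * P_FINISH[("strong", a)]
          + (1 - p_strong_given(a)) * P_FINISH[("weak", a)]
       for a in ("eat", "resist")}
# edt["resist"] = 0.70 > edt["eat"] = 0.50, so EDT says resist.
```

Same world, same numbers; the two algorithms simply answer different questions, which is exactly the cupcake dilemma.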

A Third Option

Each of these models has strengths and weaknesses, and a number of thought experiments – especially the famous Newcomb’s Paradox – have sparked ongoing discussions and disagreements about what decision theory model is best.

One attempt to improve on standard models is Timeless Decision Theory, a method devised by Eliezer Yudkowsky of the Machine Intelligence Research Institute.  Alex Altair recently wrote up an overview, stating in the paper’s abstract:

When formulated using Bayesian networks, two standard decision algorithms (Evidential Decision Theory and Causal Decision Theory) can be shown to fail systematically when faced with aspects of the prisoner’s dilemma and so-called “Newcomblike” problems. We describe a new form of decision algorithm, called Timeless Decision Theory, which consistently wins on these problems.

It sounds promising, and I can’t wait to read it.

But Back to the Cupcakes

For our particular cupcake dilemma, there’s a way out:

Precommit. You need to promise – right now! – to always eat the cupcake when it’s presented to you. That way you don’t spend any willpower on resisting temptation, but your indulgence doesn’t give any evidence of a weak underlying will.

And that, ladies and gentlemen, is my new favorite excuse for why I ate all the cupcakes.

How has Bayes’ Rule changed the way I think?

People talk about how Bayes’ Rule is so central to rationality, and I agree. But given that I don’t go around plugging numbers into the equation in my daily life, how does Bayes actually affect my thinking?
A short answer, in my new video below:

 

 

(This is basically what the title of this blog was meant to convey — quantifying your uncertainty.)

What Would a Rational Gryffindor Read?

In the Harry Potter world, Ravenclaws are known for being the smart ones. That’s their thing. In fact, that was really all they were known for. In the books, each house could be boiled down to one or two words: Gryffindors are brave, Ravenclaws are smart, Slytherins are evil and/or racist, and Hufflepuffs are ~~pathetic~~ loyal. (Giving rise to this hilarious Second City mockery.)

But while reading Harry Potter and the Methods of Rationality, I realized that there’s actually quite a lot of potential for interesting reading in each house. Ravenclaws would be interested in philosophy of mind, cognitive science, and mathematics; Gryffindors in combat, ethics, and democracy; Slytherins in persuasion, rhetoric, and political machination; and Hufflepuffs in productivity, happiness, and the game theory of cooperation.

And so, after much thought, I found myself knee-deep in my books recreating what a rationalist from each house would have on his or her shelf. I tried to match the mood as well as the content. Here they are in the appropriate proportions for a Facebook cover image so that you can display your pride both in rationality and in your chosen house (click to see each image larger, with a book list on the left):

Rationality Ravenclaw Library

Rationality Gryffindor Library

Rationality Slytherin Library

Rationality Hufflepuff Library

What do you think? I’m always open to book recommendations and suggestions for good fits. Which bookshelf fits you best? What would you add?

Spirituality and “skeptuality”

Is “rational” spirituality a contradiction in terms? In the latest episode of the Rationally Speaking podcast, Massimo and I try to pin down what people mean when they call themselves “spiritual,” what inspires spiritual experiences and attitudes, and whether spirituality can be compatible with a naturalist view of the world.

Are there benefits that skeptics and other secular people could possibly get from incorporating some variants on traditional spiritual practices — like prayer, ritual, song, communal worship, and so on — into their own lives?

We examine a variety of attempts to do so, and ask: how well have such attempts worked, and do they come with any potential pitfalls for our rationality?

http://www.rationallyspeakingpodcast.org/show/rs55-spirituality.html

How to want to change your mind

New video blog: “How to Want to Change your Mind.”

This one’s full of useful tips to turn off your “defensive” instincts in debates, and instead cultivate the kind of fair-minded approach that’s focused on figuring out the truth, not on “winning” an argument.

A rational view of tradition

In my latest video blog I answer a listener’s question about why rationalists are more likely to abandon social norms like marriage, monogamy, standard gender roles, having children, and so on. And then I weigh in on whether that’s a rational attitude to take:

You’re such an essentialist!

My latest video blog is about essentialism, and why it’s damaging to your rationality — and your happiness.

My Little Pony: Reality is Magic!

(Cross-posted at 3 Quarks Daily)

You probably won’t be very surprised to hear that someone decided to reboot the classic 80’s My Little Pony cartoon based on a line of popular pony toys. After all, sequels and shout-outs to familiar brands have become the foundation of the entertainment industry. The new ‘n improved cartoon, called My Little Pony: Friendship is Magic, follows a nerdy intellectual pony named Twilight Sparkle, who learns about the magic of friendship through her adventures with the other ponies in Ponyville.

But you might be surprised to learn that My Little Pony: Friendship is Magic’s biggest accolades have come not from its target audience of little girls and their families, but from a fervent adult fanbase. I first heard of My Little Pony: Friendship is Magic from one of my favorite sources of intelligent pop culture criticism, The Onion’s AV Club, which gave the show an enthusiastic review last year. (I had my suspicions at first that the AV Club’s enthusiasm was meant to be ironic, but they insisted that the show wore down their defenses, and that it was “legitimately entertaining and lots of fun.” So either their appreciation of My Little Pony: Friendship is Magic is genuine, or irony has gotten way more poker-faced than I realized.)

And you might be even more taken aback to learn that many, if not most, of those adult My Little Pony: Friendship is Magic fans are men and that they’ve even coined a name for themselves: “Bronies.” At least, I was taken aback. In fact, my curiosity was sufficiently piqued that I contacted Purple Tinker, the person in charge of organizing the bronies’ seasonal convention in New York City. Purple Tinker was friendly and helpful, saying that he had read about my work in the skeptic/rationalist communities, and commended me as only a brony could: “Bravo – that’s very Twilight Sparkle of you!”

But when I finally sat down and watched the show, I realized that while Purple Tinker may be skeptic-friendly, the show he loves is not. The episode I watched, “Feeling Pinkie Keen,” centers on a pony named Pinkie Pie, who interprets the twitches in her tail and the itches on her flank as omens of some impending catastrophe, big or small. “Something’s going to fall!” Pinkie Pie shrieks, a few beats before Twilight Sparkle accidentally stumbles into a ditch. The other ponies accept her premonitions unquestioningly, but empirically-minded Twilight Sparkle is certain that Pinkie Pie’s successes are either a hoax or a coincidence. She’s determined to get to the bottom of the matter, shadowing Pinkie Pie in secret to observe whether the premonitions disappear when there’s no appreciative audience around, and hooking Pinkie Pie up to what appears to be a makeshift MRI machine which Twilight Sparkle apparently has lying around her house, to see whether the premonitions are accompanied by any unusual brain activity.

Meanwhile, Twilight Sparkle is being more than a little snotty about how sure she is that she’s right, and how she just can’t wait to see the look on Pinkie Pie’s face when Pinkie Pie gets proven wrong. Which, of course, is intended to make it all the more enjoyable to the audience when — spoiler alert! — Twilight Sparkle’s investigations yield no answers, and Pinkie Pie’s premonitions just keep coming true. Finally, Twilight Sparkle admits defeat: “I’ve learned that there are some things you just can’t explain. But that doesn’t mean they’re not true. You just have to choose to believe.”

Nooo, Twilight Sparkle, no! You are a disgrace to empirical ponies everywhere. And I’m not saying that because Twilight Sparkle “gave in” and concluded that Pinkie Pie’s premonitions were real. After all, sometimes it is reasonable to conclude that an amazing new phenomenon is more likely to be real than a hoax, or a coincidence, or an exaggeration, etc. It depends on the strength of the evidence. Rather, I’m objecting to the fact that Twilight Sparkle seems to think that because she was unable to figure out how premonitions worked, that therefore science has failed.

Twilight Sparkle is an example of a Straw Vulcan, a character who supposedly represents the height of rationality and logic, but who ends up looking like a fool compared to other, less rational characters. That’s because the Straw Vulcan brand of rationality isn’t real rationality. It’s a gimpy caricature, crafted that way either because the writers want to make rationality look bad, or because they genuinely think that’s what rationality looks like. In a talk I gave at this year’s Skepticon IV conference, I described some characteristic traits of a Straw Vulcan, such as an inability to enjoy life or feel emotions, and an unwillingness to make any decisions without all the information. Now I can add another trait to my list, thanks to Twilight Sparkle: the attitude that if we can’t figure out the explanation, then there isn’t one.

Do you think it’s possible that anyone missed the anti-inquiry message?  Hard to imagine, given the fact that the skeptical pony seems mainly motivated by a desire to prove other people wrong and gloat in their faces, and given her newly-humbled admission that “sometimes you have to just choose to believe.” But just in case there was anyone in the audience who didn’t get it yet, the writers also included a scene in which Twilight Sparkle is only able to escape from a monster by jumping across a chasm – and she’s scared, but the other ponies urge her on by crying out, “Twilight Sparkle, take a leap of faith!”

And yes, of course, My Little Pony: Friendship is Magic is “just” a kids’ cartoon, and I can understand why people might be tempted to roll their eyes at me for taking its message seriously. I don’t know to what extent children internalize the messages of the movies, TV, books, and other media they consume. But I do know that there are plenty of messages that we, as a society, would rightfully object to if we found them in a kids’ cartoon – imagine if one of the ponies played dumb to win the favors of a boy-pony and then they both lived happily ever after. Or if an episode ended with Twilight Sparkle chirping, “I’ve learned you should always do whatever it takes to impress the cool ponies!” So why aren’t we just as intolerant of a show that tells kids: “You can either be an obnoxious skeptic, or you can stop asking questions and just have faith”?