Bet You Can’t Solve The Birthday Paradox

If you’ve heard of the Birthday Paradox and/or like math puzzles and/or want to know how it connects with Computational Genomics and the Seven Bridges of Konigsberg, this is for you.

[Image: Math cake! By Sarah Lynn]

The Birthday Paradox (which would be more accurately named The Birthday Somewhat Unintuitive Result) asks “How many people do you need in a group before there’s a 50% chance that at least two share a birthday?”

It’s easier to flip it around and ask “For a given number of people, what are the chances that NONE of them share a birthday?” This makes it simpler: each time we add another person to the group, we just need to calculate the number of “open” birthdays and multiply by the odds of our new member having one of those.

P(no shared birthdays for n people) = \frac{365}{365} \cdot \frac{364}{365} \cdot \frac{363}{365} \cdots \frac{365-(n-1)}{365} = \frac{365!}{(365-n)!\,365^{n}}

If we just keep increasing n and calculating the probability, we find that with 23 people there’s a 50% chance of at least two people sharing a birthday.
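If you want to check that number yourself, here's a minimal Python sketch of the brute-force calculation:

```python
from math import prod

def p_no_shared_birthday(n, days=365):
    """Probability that n people all have distinct birthdays."""
    return prod((days - i) / days for i in range(n))

print(1 - p_no_shared_birthday(22))  # ~0.476
print(1 - p_no_shared_birthday(23))  # ~0.507 -- crosses 50% at 23 people
```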

(Ok, technically we answered “Given a number of people what probability…” and used brute force instead of “Given a probability what number of people…” but let’s ignore that; everyone else does.)

How about if we want to know the probability that no THREE people in a group share a birthday?

This variant is trickier, and has tripped up many smart people. Do you think you can solve it?

Give it a shot! I’ll talk about the solution after some context for why it matters and some graphics/tools made with Wolfram Mathematica. I started as a machine learning research scientist with Wolfram earlier this year and I’ve been really enjoying playing with the tools!


Birthdays, Bridges, de Bruijn Graphs, and… Bgenomics

This is more than an idle math puzzle; it’s related to a fascinating challenge in Computational Genomics. (It was actually a question on my homework in grad school.)

When we want to read a DNA sequence, it’s usually far too long for our machines to process all at once. Instead, we copy the sequence a bunch, slice the copies up, and gather all the pieces of a chosen, more manageable, length — say, 4 base pair “letters”.

In the end, we know every length-four subsection in our DNA sequence, we just don’t know their order. For example, if we had a length-7 sequence and took all the random chunks of 4, we might end up with

TACA, ATTA, GATT, TTAC

Now, as though it were a one-dimensional jigsaw puzzle, we try to piece them together. Each chunk of 4 will overlap with its neighbors by 3 letters: the chunk ATTA ends with TTA, so we know the next chunk must start with TTA. There’s only one which does — TTAC — so those pieces fit together as ATTAC. Now we look for a piece that begins with TAC, and so on.


In our case, there’s only one unique way to arrange the four chunks:

GATTACA

As the sequences get longer, scientists turn to graph theory to find this arrangement. Treating each DNA chunk as an edge, we can form a directed graph indicating which chunks could follow (overlap with) each other.

This overlapping-ness graph is called a De Bruijn Graph, and if there’s a path which uses every edge exactly once, we’ve done it: we’ve found a way to order the DNA chunks and reconstruct the larger sequence!


If this sounds a bit like the Seven Bridges of Konigsberg problem, there’s a reason — it’s the same issue. It’s as though we were walking from “overlap island” to “overlap island”, each time saying to ourselves “Ok, if I have a piece ending in GCA, what bridge-chunk could come next?” and hoping to find a path that uses every chunk.

Since Euler solved the Bridges of Konigsberg problem, this type of path — using every edge exactly once — is known as an Eulerian Path. The great thing is that we have really efficient algorithms to find them!
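To make that concrete, here's a minimal Python sketch — my own illustration, not real assembly code — that builds the de Bruijn graph from our GATTACA chunks and walks an Eulerian path through it using Hierholzer's algorithm:

```python
from collections import defaultdict

def assemble(chunks):
    """Build the de Bruijn graph of the chunks and walk an Eulerian path
    through it (Hierholzer's algorithm), gluing the overlaps back together."""
    graph = defaultdict(list)                      # 3-letter prefix -> list of 3-letter suffixes
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    for chunk in chunks:
        prefix, suffix = chunk[:-1], chunk[1:]
        graph[prefix].append(suffix)
        out_deg[prefix] += 1
        in_deg[suffix] += 1

    # Start at the node with one more outgoing edge than incoming (if any).
    nodes = set(out_deg) | set(in_deg)
    start = next((n for n in nodes if out_deg[n] - in_deg[n] == 1), chunks[0][:-1])

    # Hierholzer: keep following unused edges, backtracking when we get stuck.
    stack, path = [start], []
    while stack:
        node = stack[-1]
        if graph[node]:
            stack.append(graph[node].pop())
        else:
            path.append(stack.pop())
    path.reverse()

    # After the first node, each node along the path contributes one new letter.
    return path[0] + "".join(node[-1] for node in path[1:])

print(assemble(["TACA", "ATTA", "GATT", "TTAC"]))   # GATTACA
```

Each chunk is an edge, each 3-letter overlap is a node, and reading off the nodes along the path spells out the reconstructed sequence.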


However, we can run into trouble if some of our “overlap” sections aren’t distinct. If a section of DNA is repeated, that’s ok — we can probably still find the Eulerian Path through the graph.


…But if a section of the original DNA repeats three times, we’re screwed. When the repeat is at least as long as our “overlap” we can no longer find a unique path — multiple Eulerian Paths exist.

Let’s take the sequence AACATCCCATGGCATTT, in which the phrase “CAT” repeats three times. Once we reach the “ACAT” chunk, we don’t know what comes next. Overlapping with chunk “CATT” would lead us straight to the end — leaving out many chunks — so we know that comes last. But the loops starting either CATG or CATC could come next.


So, if we’re going to read a long DNA sequence, we might ask:
How many overlapping chunks of 4 can we take before there’s a 50% (or 5%, or 1%) chance of seeing a triple-repeat that ruins our attempt to reconstruct the original?

This is where we return to our Birthday Paradox variant!


Back to the Birthday Problem

With our Birthday Problem, there are 365 different birthdays a person can have. With DNA chunks of 4, there are 64 different three-letter ways each chunk could end. If any three chunks have the same ending, we won’t know how to reconstruct our sequence.

As the chunks get longer, we have a much better chance of producing a unique Eulerian Path from our graph.

While we can’t move the Earth farther from the Sun (nor should we (probably)) to increase the number of possible birthdays in a year, we CAN use chunks larger than 4 and increase the number of ways each chunk can end. So if we know we’re sequencing a genome 100,000 letters long, how long do our chunks need to be in order for us to have a >99% chance of reconstructing it?

Since starting my job at Wolfram Research, I’ve been playing with their graph capabilities and put together this little interactive tool. It generates a random gene and shows the De Bruijn Graph when you take chunks of different lengths. It’s amazing how quickly a totally chaotic graph becomes more orderly!

(The cloud-deployment can be a bit sluggish, but give it a second after clicking a button. If you get the desktop version of Wolfram Mathematica you can play with things like this incredibly quickly. They’re so cool.)


The Answer

Heck if I know. Sorry.

I can get the right answer of about 88 by running simulations, but I didn’t manage to derive the general formula for my class.
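For the curious, here's the kind of quick-and-dirty simulation I mean — a small Python sketch that estimates the probability of a triple for a given group size (the trial count and the scanning range are just illustrative choices):

```python
import random
from collections import Counter

def p_at_least_three_share(n_people, n_days=365, trials=20_000):
    """Estimate the chance that some birthday is shared by 3+ people."""
    hits = 0
    for _ in range(trials):
        counts = Counter(random.randrange(n_days) for _ in range(n_people))
        if counts.most_common(1)[0][1] >= 3:
            hits += 1
    return hits / trials

# Scan group sizes until the estimate crosses 50% -- it happens around n = 88.
for n in range(84, 93):
    print(n, round(p_at_least_three_share(n), 3))
```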

Every time I’ve shown this question to a friend — and the first time I saw it on my Computational Genomics homework — the response has been “Oh, this is simple. You just… wait, no, that won’t work. We can just… well… hm.”

Stack Exchange confirmed my fears: it’s ugly, and we typically settle for approximations. I was momentarily excited to find Dr. Math claiming to solve it, but they’d made a mistake and other mathematicians corrected them.

This 2005 paper in The Journal of Statistical Planning and Inference by Anirban DasGupta provides an answer, but it’s way more involved than I expected:

[Screenshot of the formula from DasGupta’s 2005 paper]

Why is this so ugly?

In the original version, there’s only one situation to track — every person has a distinct birthday. But in our version, for 23 people:

  • all 23 people could have distinct birthdays
  • one pair could share a birthday and the other 21 are distinct
  • two pairs could share birthdays and the other 19 are distinct
  • three pairs could share birthdays and the other 17 are distinct
  • …
  • eleven pairs could share birthdays and the last one is distinct

For each scenario, we need to calculate the number of ways it can occur, the probability of each, and how it impacts our chance of getting a triple. It’s a mess.

But enough of my complaining about an old homework problem I never solved and which I’m clearly over and never think about.

How did you approach the problem? Did you solve it? Let me know!


Personal note: In my continuing efforts against Perfectionism, I’m going to declare this done. It’s taken up real estate in my head long enough.

Forget 3-D Chess; Here’s My 1-D Chess Rules

Chess is sometimes held up as the embodiment of strategy and brilliance — if you’re playing chess while your opponent is playing checkers, you’re out-thinking them. Those even smarter can play chess in higher dimensions, with 3-D chess often used as a metaphor for politics. (There’s even a 5-D chess game on Steam which looks mind-bending.)

But going the other direction, the existence of 5-D, 3-D, and 2-D chess made me wonder: is there a way 1-D chess could work? And be fun, that is.

I’m not the first to have this thought; many people have tried their hand at designing one-dimensional chess including the late great Martin Gardner. His approach was for each side to have a single King, Knight, and Rook at the ends of an eight-tile long board. With so few pieces and spaces it’s fairly easy to “solve” the game, mapping out every possible move the same way we can solve tic-tac-toe.

I set out to create 1-D Chess which kept the spirit of the game as much as possible. It was initially inspired by conversations with Brienne years ago about designing mobius chess (which is topologically identical to playing on a loop, but is *obviously* cooler.)

Values to Preserve

  1. Low complexity – Piece moves are simple, there are few rules
  2. High depth – Many games are possible, with a mix of strategy and tactics
  3. Full information – No fog of war, no hidden cards, no randomness
  4. Personalized openings – Different opening play/counter-play options to match your aesthetics and strengths.

The last one is contentious — I know many people bemoan the amount of memorization required to learn the various chess openings. Bobby Fischer even famously proposed Fischer Random Chess which randomized the back row each game, thus stripping the game down to a player’s ability to understand the situation and respond.

However, I happen to enjoy the way you can study various opening strategies and say “I prefer to use the Alapin Variation to counter the Accelerated Dragon Sicilian Defense — I hate ceding the middle of the board.” Being able to steer the game toward your preferred style before getting into tactical elements of the game is a key part of what makes a game feel like *chess* to me.

So, after a lot of brainstorming and a lot of rejected ideas — see the last section — I whittled it down to a few core concepts. Pictures are worth a thousand words (although I’m sure there are opportunities for arbitrage somewhere…) so here’s a screenshot of the game I started building in Tabletop Simulator:

My Proposal for 1D Chess

[Screenshot of the ring-board prototype in Tabletop Simulator]

  1. Ring Board – 28 squares; the outside of a standard chess board
  2. 12 Pieces per side – 4 fewer pawns, but otherwise the same pieces
  3. Placement Control – Players take turns placing non-pawns in their region to set up

Ring Board

Look, nobody said it had to be a line segment. Since each square has exactly two neighbors and the entire board is connected, it counts as 1-D.  Put it into polar coordinates if you have to.

Using a 28-square ring allows us to keep the standard chess board, but it also allows much more depth of play without adding complexity to the rules. Like in 2-D chess, you can focus your attack on one side or the other, and you have the ability to try interrupting your opponent’s plans by striking and causing havoc on the other side of the fight.

12 Pieces, Simple Moves

Similarly, I stuck with the original pieces and kept their movement as close in spirit as I could:

  • Pawn: Move forward one or capture two spaces ahead, ignoring the square in front. Cannot turn around.
  • Bishop: Moves up to 6 spaces, 2 at a time (hopping over every other square).
  • Rook: Moves up to 3 spaces forward, 1 at a time. [EDIT: Because the Rooks slide instead of hop, they get stuck easily. My current solution is that they can move *through* the King.]
  • Knight: Jumps either 3 or 5 squares
  • Queen: Can move like the Rook or Bishop
  • King: Moves one square.

This move set creates parallels to the 2-D version: Bishops stay on their color, pawns can get locked together, and Knights have a unique move (5 squares) that not even the Queen has.

The moves themselves stay fairly simple, but allow the kind of interplay that I like in 2-D chess with pieces defending each other and getting in each other’s way.

Opening Placement

Each player starts with a block of 12 squares on opposite sides of the ring, with the 2 squares at each end filled by pawns. The remaining 8 squares are up to the players to arrange.

Starting with White, the players take turns placing one of their pieces on an empty square between their pawns.

It’s up to you: You can choose to create an unbalanced attack with both Knights on one side, ready to jump over the pawns and storm the enemy. You can choose to put your Bishops on the inside, where they have an easier time of getting out, or on the outside so that the Rooks are the last line of defense to mop up any attacks. You can leave the King with the Queen — your strongest piece — or between two Rooks…

There are lots of possibilities which rely on how you enjoy playing and how your opponent seems to be setting up. While the complexity of this rule is low, it adds immense depth to the game and prevents it from being quite so easily “solved”.

By requiring the pawns to take up the outermost two spaces, initial move choices are limited to advancing a pawn or using a Knight to hop over them. Moving one pawn can give your Bishops or Queen a way to move through them and enter the fray.  This is all just like in the 2-D version in a way I find aesthetically very pleasing.

If you prefer to just focus on the tactical side of things, you can use the normal ordering or give both players mirrored random arrangements.

Ideas that I considered but didn’t use:

Here are some snippets of ideas that I had but rejected because the complexity/depth tradeoff wasn’t good enough, or the game strayed too far and stopped being recognizable as “Chess”.

  • Making pieces face a direction, limiting them to moving forward
    • Allowed to turn around if the square immediately in front of them is filled
    • Might allow rules that make it easier to capture pieces from the back
  • Pieces can only capture certain types of pieces (in either a rock-paper-scissors style or Stratego style)
  • Ranged attacks without moving
  • Allow pieces to swap with each other
    • Either upon landing on your own, or as a type of movement
  • Pieces that push or pull rather than capture
  • Pieces that move differently when next to certain others
    • Rooks launch pawns, for example
    • The Queen could move in the pattern of any piece in a contiguous chain with her
  • Different terrain
    • Mud tiles which must be stopped on
    • Rocky terrain which prevents knights from landing on it
  • Pieces spawn new pieces next to them as an action

What do you think? Ideas and opinions are welcome!

 

A Pretty-Good Mathematical Model of Perfectionism

I struggle with perfectionism. Well, not so much “struggle with” — I’m f*cking great at it. It comes naturally.

There are some upsides, but perfectionism is also associated with anxiety, depression, procrastination, and damaged relationships. Perhaps you, like I, have spent far too much time and emotional energy making sure that an email had the right word choice, had no typos, didn’t reuse a phrase in successive sentences/paragraphs, and closed with the ‘correct’ sign-off. (‘Best,’ is almost always optimal, by the way).

“If I couldn’t do something that rated 10 out of 10 — or at least close to that — I didn’t want to do it at all. Being a perfectionist was an ongoing source of suffering and unhappiness for me … Unfortunately, many of us have been conditioned to hold ourselves to impossible standards. This is a stressful mind state to live in, that’s for sure.” ~ Tony Bernard J.D.

The topic of perfectionism confused me for years. Of course you want things to be perfect; why would you ever actively want something to be worse? However, there’s way more to it than that: It’s a complex interplay between effort, time, motivation, and expectations.

Far too many self-help recommendations essentially said “Be ok with mediocrity!” which… did not speak to me, to say the least.

To better understand the concept, I went through a number of books and papers before building a quasi-mathematical model. You know, like ya’do.

I’ve come to see perfectionism as a mindset with a particular calibration between the quality of your work and your emotional reaction — with decreased sensitivity to marginal differences in lower-quality work and increasing sensitivity as the quality goes up.


  • In a “Balanced” mindset, you become happier in linear proportion to how much better your work is going. (y = x)
  • In a “Satisficing” mindset — taking a pass/fail test, for example — you care about whether something is “good enough”. Most of your emotional variance comes as you approach and meet that threshold.  ( e^x / (1+e^x) )
  • In a Perfectionist mindset, the relationship between quality and emotion is polynomial. You feel almost equally bad about scoring a 40% on a test vs. a 65%, but the difference between a 90% and 93% looms large. (y = x^7)
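If you’d like to see the three curves above side by side, here’s a tiny matplotlib sketch (I’ve shifted and steepened the satisficing sigmoid so its threshold lands inside the 0–1 range; those exact constants are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

quality = np.linspace(0, 1, 200)

plt.plot(quality, quality, label="Balanced: y = x")
sigmoid = np.exp(10 * (quality - 0.6)) / (1 + np.exp(10 * (quality - 0.6)))
plt.plot(quality, sigmoid, label="Satisficing: sigmoid around a 'good enough' threshold")
plt.plot(quality, quality**7, label="Perfectionist: y = x^7")

plt.xlabel("Quality of work")
plt.ylabel("Emotional payoff")
plt.legend()
plt.show()
```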

Looking at the model, I realized it could explain a number of experiences I’d had.


Why even small tasks seem daunting to a perfectionist

A common experience with a perfectionist mindset is having trouble ‘letting go’ of a project — we want to keep tinkering with it, improving it, and never feel quite comfortable moving on.  (I don’t want to say how long this draft sat around.)

This makes sense given the model:

[Graph: how much quality each mindset needs before feeling “happy enough”]

When I think about clicking ‘send’ or ‘post’ before I’ve checked for typos, before I’ve reread everything, before considering where it might be wrong or unclear… it just feels, well, WRONG. I’m not yet happy with it and have trouble declaring it done.

Apart from requiring more time and effort, this can make even seemingly trivial tasks feel daunting. Internally, if you know that a short email will take an hour and a half it’s going to loom large even if you have trouble explaining quite why such a small thing is making you feel overwhelmed.


What’s helped me: A likely culprit is overestimating the consequences of mistakes. One solution is to be concrete and write down what you expect to happen if it turns out you have a typo, miss a shot, or bomb a test. Sometimes all it takes to readjust is examining those expectations consciously. Other times you’ll need to experience the ‘failure’, at which point you can compare it to your stated expectations.


Why perfectionists give up on hobbies and tasks easily

Another way to look at this is: if you don’t expect to reach high standards, a project just doesn’t seem worth doing.


The result is a kind of min-max approach to life: If you can’t excel, don’t bother spending time on it.

That’s not necessarily a bad thing!

However, we don’t always have control. In my nonprofit communications career, I sometimes got assigned to write press releases on topics that *might* get attention, but which seemed not newsworthy to me. It may have still been worth the few hours of my time in case it grabbed a reporter’s eye. It was important to keep my job. But I had so. much. trouble. getting myself to do the work.

Even in the personal realm, picking up a new hobby becomes difficult. If it doesn’t seem like you’re going to be amazing at it, the hobby as a whole loses its luster.


What’s helped me: A big problem for me has been overlooking the benefits gained from so-called “failure”. Once I start to factor in e.g. how much I expect to learn (so that I can do better in the future) I end up feeling much better about giving things a shot.


Why procrastination (and anxiety) are common

At a granular scale, the problem becomes worse. Rather than “How good do I expect to feel at the end of this?” our emotional reaction is probably trained by the in-the-moment “How much happier do I expect to feel as a result of one more bit of work?”

In other words, we can view the derivative/slope of these graphs as motivation:

[Graph: motivation — the slope of each emotion curve — across quality levels]

With a perfectionist mindset, the bigger and further away a goal is, the more difficult it will be to feel motivated in the moment.  For much of the time, we’re trying to push ourselves to work without getting any internal positive reinforcement.

This is a particular issue in the Effective Altruism movement, where the goal is to *checks notes* Save the World — or, stated more carefully, to “figure out how to do the most good, and then do it.”

It’s true that as a perfectionist nears their goal, they’re extremely motivated! But that also means that the stakes are very high for every decision and every action.  …Which is a recipe for anxiety. Terrific.


What’s helped me: To the extent that I can, I find that breaking tasks into pieces helps. If I think of my goal as “Save the World”, another day of work won’t feel very important. But a goal of “Finish reading another research paper” is something I can make real progress on in a day!


All models are wrong, but some are useful

This framework isn’t perfect. Neither is this writeup. (I’m hyper-aware.) But this idea has been in my head, in my drafts folder, and unfinished for months. Rather than give in to the sense that I “should” keep working on it, I’m going to try following my own advice. I’m remembering that:

  • I’ve clarified my thinking a ton by writing everything down.
  • The consequences of a sloppy post are minimal in the big scheme of things.
  • This isn’t supposed to be my final conclusion – it’s one step on the path

Even if it’s not perfect, perhaps the current iteration of this framework can help you understand me, yourself, or perfectionists in your life.

I used to have this “DONE IS BETTER THAN PERFECT” poster draped over a chair in my office. I never got around to hanging it up, but honestly? It seems better that way.


Articles/books I found helpful:

The Perfectionist’s Script for Self-Defeat by David Burns (PDF)

When Perfect Isn’t Good Enough by Martin M. Antony & Richard P. Swinson

Mastering the Art of Quitting by Peg Streep & Alan Bernstein

Better By Mistake by Alina Tugend

The Procrastination Equation by Piers Steel

Finding Pi From Random Numbers

If I gave you 10,000 random numbers between 0 and 1, how precisely could you estimate pi? It’s Pi Approximation Day (22/7 in the European format), which seems like the perfect time to share some math!

When we’re modeling something complex and either can’t or don’t want to bother to find a closed-form analytic solution, we can find a way to get close to the answer with a Monte Carlo simulation. The more precisely we want to estimate the answer, the more simulations we would create.

One classic Monte Carlo approach to find pi is to treat our 10,000 numbers as the x and y coordinates of 5,000 points in a square between (0,0) and (1,1). If we draw a unit circle centered at (0,0), the fraction of points that land inside it estimates the area of the quarter-circle within the square – which should be pi / 4. Multiply by 4, and we have our estimate.


simulated mean:  3.1608
95% confidence interval: 3.115 3.206
confidence interval size: 0.091
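Here’s a minimal Python sketch of that hit-or-miss approach (your exact numbers will vary from run to run):

```python
import numpy as np

rng = np.random.default_rng()
x, y = rng.random(5_000), rng.random(5_000)   # our 10,000 random numbers, as 5,000 (x, y) points

inside = x**2 + y**2 <= 1                     # did the point land inside the quarter circle?
print(4 * inside.mean())                      # estimate of pi, e.g. ~3.16
```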

This technique works, but it’s not the most precise. There’s a fair bit of variance – a 95% confidence interval is about .09 units wide, from 3.12 to 3.21. Can we do better?

Another way to find the quarter-circle’s area is to treat it as a function and take its average value. (Area is average height times width, the width is 1, so the area inside the quarter-circle is just its average height.) That would give us pi/4, so we multiply by 4 to get our estimate for pi.

y^2+x^2=1

y^2=1-x^2

f(x)=\sqrt{1-x^{2}}

We have 10,000 random numbers between 0 and 1; all we have to do is calculate f(x) for each and take the mean:

simulated mean:  3.1500
95% confidence interval: 3.133 - 3.167
confidence interval size: 0.0348

This gives us a more precise estimate;  the 95% confidence interval is less than half what it was!
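In code, the change is tiny — average the function’s height instead of counting hits (again just a sketch, reusing all 10,000 numbers as x values):

```python
import numpy as np

rng = np.random.default_rng()
x = rng.random(10_000)                 # all 10,000 numbers used as x values this time

print(4 * np.sqrt(1 - x**2).mean())    # 4 * average height of the quarter circle
```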

But we can do better.

Antithetic Variates

What happens if we take our 10,000 random numbers and flip them around? They’re just a set of points uniformly distributed between 0 and 1, so (1-x) is also a set of points uniformly distributed between 0 and 1. If our estimate built from f(x) has an expected value of pi with some variance, then the estimate built from f(1-x) has the same expected value and the same variance.

So how does this help us? It looks like we just have two different ways to get the same level of precision.

Well, if f(x) is particularly high, then f(1-x) is going to be particularly low. By pairing each random number x with its mirror image 1-x, we can offset some of the error and get an estimate more closely centered around the true mean. Averaging the two estimators, each with the same expected value, still gives us that same expected value.

(This trick, known as using antithetic variates, doesn’t work with every function, but works here because the function f(x) always decreases as x increases.)
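Sketched in Python, the antithetic version just averages the paired estimates:

```python
import numpy as np

rng = np.random.default_rng()
x = rng.random(10_000)

f = lambda t: 4 * np.sqrt(1 - t**2)
print(((f(x) + f(1 - x)) / 2).mean())   # each x is paired with its mirror, 1 - x
```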


simulated mean:  3.1389
95% confidence interval: 3.132 - 3.145
confidence interval size: 0.0131

Lo and behold, our 95% confidence interval has narrowed down to 0.013, still only using 10,000 random numbers!

To be fair, this only beats 22/7 about 30% of the time with 10,000 random numbers. Can we reliably beat the approximation without resorting to more simulations?

Control Variates

It turns out we can squeeze a bit more information out of those randomly generated numbers. If we know the exact expected value for a part of the function, we can be more deliberate about offsetting the variance. In this case, let’s use c(x)=x^2 as our “control variate function”, since we know that the average value of x^2 from 0 to 1 is exactly 1/3.

Where our simulated function was

f(x)=\sqrt{1-x^{2}}

now we add a term that will have an expected value of 0, but will help reduce variance:

f'(x)=\sqrt{1-x^{2}}+b(x^2-\frac{1}{3})

For each of our 10,000 random x’s, if x^2 is above average, we know that f(x) will probably be a bit *below* average, and we nudge it up. If x^2 is below average, we know f(x) is likely a bit high, and nudge it down. The overall expected value doesn’t change, but we’re compressing things even further toward the mean.

The constant ‘b’ in our offset term determines how much we ‘nudge’ our function, and is estimated based on how our control variate covaries with the target function:

b = -\frac{Covariance(f(x), c(x))}{Variance(c(x))}
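Here’s the same idea as a Python sketch, estimating b from the sample and applying the adjustment:

```python
import numpy as np

rng = np.random.default_rng()
x = rng.random(10_000)

f = 4 * np.sqrt(1 - x**2)                    # target function: its true mean is pi
c = x**2                                     # control variate: its true mean is exactly 1/3

b = -np.cov(f, c)[0, 1] / np.var(c, ddof=1)  # optimal coefficient, roughly 2.9 here
print(b, (f + b * (c - 1/3)).mean())
```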

(In this case, b is about 2.9) Here’s what we get:

[Plot: the original function and the flattened, control-variate-adjusted function]

simulated mean:  3.1412
95% confidence interval: 3.1381 - 3.1443
confidence interval size: 0.0062

See how the offset flattens our new function (in orange) to be tightly centered around 3.14?

This is pretty darn good. Without resorting to more simulations, we reduced our 95% confidence interval to 0.006.  This algorithm gives a closer approximation to pi than 22/7 about 57% of the time.

If we’re not bound by the number of random numbers we generate, we can get as close as we want. With 100,000 points, our control variates technique has a 95% confidence interval of 0.002, and beats 22/7 about 98% of the time.

These days, as computing power gets cheaper, we can generate 100,000 or even 1,000,000 random numbers with no problem. That’s what makes simulations so versatile – we can find ways to simulate even incredibly complicated processes and unbounded functions, deciding how precise we need to be.

Happy Pi Approximation Day!

(You may ask, why is there a “Pi Approximation Day” and not a “Pi Simulation Day”? Well, according to Nick Bostrom, every day is Simulation Day. Probably.)

Which Cognitive Bias is Making NFL Coaches Predictable?

In football, it pays to be unpredictable (although the “wrong way touchdown” might be taking it a bit far.) If the other team picks up on an unintended pattern in your play calling, they can take advantage of it and adjust their strategy to counter yours. Coaches and their staff of coordinators are paid millions of dollars to call plays that maximize their team’s talent and exploit their opponent’s weaknesses.

That’s why it surprised Brian Burke, formerly of AdvancedNFLAnalytics.com (and now hired by ESPN), to see a peculiar trend: football teams seem to rush a remarkably high percentage of the time on 2nd and 10 compared to 2nd and 9 or 11.

What’s causing that?

His insight was that 2nd and 10 disproportionately followed an incomplete pass. This generated two hypotheses:

  1. Coaches (like all humans) are bad at generating random sequences, and have a tendency to alternate too much when they’re trying to be genuinely random. Since 2nd and 10 is most likely the result of a 1st down pass, alternating would produce a high percent of 2nd down rushes.
  2. Coaches are suffering from the ‘small sample fallacy’ and ‘recency bias’, overreacting to the result of the previous play. Since 2nd and 10 not only likely follows a pass, but a failed pass, coaches have an impulse to try the alternative without realizing they’re being predictable.

These explanations made sense to me, and I wrote about the phenomenon a few years ago. But now that I’ve been learning data science, I can dive deeper into the analysis and add a hypothesis of my own.

The following work is based on the play-by-play data for every NFL game from 2002 through 2012, which Brian kindly posted. I spent some time processing it to create variables like Previous Season Rushing %, Yards per Pass, Yards Allowed per Pass by Defense, and QB Completion Percent. The Python notebooks are available on my GitHub, although the data files were too large to host easily.

Irrationality? Or Confounding Variables?

Since this is an observational study rather than a randomized control trial, there are bound to be confounding variables. In our case, we’re comparing coaches’ play calling on 2nd down after getting no yards on their team’s 1st down rush or pass. But those scenarios don’t come from the same distribution of game situations.

A number of variables could be in play, some exaggerating the trend and others minimizing it. For example, teams that passed for no gain on 1st down (resulting in 2nd and 10) have a disproportionate number of inaccurate quarterbacks (the left graph). These teams with inaccurate quarterbacks are more likely to call rushing plays on 2nd down (the right graph). Combine those factors, and we don’t know whether any difference in play calling is caused by the 1st down play type or the quality of quarterback.

[Charts: the quarterback-accuracy confound described above]

The classic technique is to train a regression model to predict the next play call, and judge a variable’s impact by the coefficient the model gives that variable.  Unfortunately, models that give interpretable coefficients tend to treat each variable as either positively or negatively correlated with the target – so time remaining can’t be positively correlated with a coach calling running plays when the team is losing and negatively correlated when the team is winning. Since the relationships in the data are more complicated, we need a model that can handle them.

I saw my chance to try a technique I learned at the Boston Data Festival last year: Inverse Probability of Treatment Weighting.

In essence, the goal is to create artificial balance between your ‘treatment’ and ‘control’ groups — in our case, 2nd and 10 situations following 1st down passes vs. following 1st down rushes. We want to take plays with under-represented characteristics and ‘inflate’ them by pretending they happened more often, and – ahem – ‘deflate’ the plays with over-represented features.

To get a single metric of how over- or under-represented a play is, we train a model (one that handles non-linear relationships better) to take each 2nd down play’s confounding variables as input – score, field position, QB quality, etc. – and predict whether the 1st down play was a rush or a pass. If, based on the confounding variables, the model predicts the play was 90% likely to follow a 1st down pass – and it did – we decide the play probably has over-represented features and give it less weight in our analysis. However, if the play actually followed a 1st down rush, it must have under-represented features for the model to get it so wrong. Accordingly, we give it more weight.

After assigning each play a new weight to compensate for its confounding features (using Kfolds to avoid training the model on the very plays it’s trying to score), the two groups *should* be balanced. It’s as though we were running a scientific study, noticed that our control group had half as many men as the treatment group, and went out to recruit more men. However, since that isn’t an option, we just decided to count the men twice.
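Here’s a condensed Python sketch of what that weighting step can look like with scikit-learn — the gradient-boosted classifier and the names `confounders` and `was_pass_on_1st` are my illustrative stand-ins, not the exact code from the notebooks:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold

def iptw_weights(confounders: pd.DataFrame, was_pass_on_1st: np.ndarray, n_splits: int = 20):
    """Out-of-fold propensity scores for 'the 1st down play was a pass',
    inverted so plays with over-represented features get down-weighted."""
    propensity = np.zeros(len(was_pass_on_1st))
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, score_idx in folds.split(confounders, was_pass_on_1st):
        model = GradientBoostingClassifier()
        model.fit(confounders.iloc[train_idx], was_pass_on_1st[train_idx])
        propensity[score_idx] = model.predict_proba(confounders.iloc[score_idx])[:, 1]

    # Weight each play by 1 / P(the 'treatment' it actually received).
    return np.where(was_pass_on_1st == 1, 1 / propensity, 1 / (1 - propensity))
```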

Testing our Balance

Before processing, teams that rushed on 1st down for no gain were disproportionately likely to be teams with the lead. After the re-weighting process, the distributions are far more similar:

[Charts: distribution of lead before and after re-weighting]

Much better! They’re not all this dramatic, but lead was the strongest confounding factor and the model paid extra attention to adjust for it.

It’s great that the distributions look more similar, but that’s qualitative. To do a quantitative diagnostic, we can take the standard difference in means, recommended as a best practice in a 2015 paper by Peter C. Austin and Elizabeth A. Stuart titled “Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies”.

For each potential confounding variable, we take the difference in means between plays following 1st down passes and 1st down rushes and adjust for their combined variance. A high standard difference of means indicates that our two groups are dissimilar, and in need of balancing. The standardized differences had a max of around 47% and median of 7.5% before applying IPT-weighting, which reduced the differences to 9% and 3.1%, respectively.
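The diagnostic itself is simple — here’s a sketch of the (optionally weighted) standardized difference of means for a single variable, following the formula in that paper:

```python
import numpy as np

def standardized_difference(x_pass, x_rush, w_pass=None, w_rush=None):
    """|difference in (weighted) means| over the pooled standard deviation,
    as in Austin & Stuart (2015)."""
    m1, m2 = np.average(x_pass, weights=w_pass), np.average(x_rush, weights=w_rush)
    v1 = np.average((x_pass - m1) ** 2, weights=w_pass)
    v2 = np.average((x_rush - m2) ** 2, weights=w_rush)
    return abs(m1 - m2) / np.sqrt((v1 + v2) / 2)
```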


Actually Answering Our Question

So, now that we’ve done what we can to balance the groups, do coaches still call rushing plays on 2nd and 10 more often after 1st down passes than after rushes? In a word, yes.


In fact, the pattern is even stronger after controlling for game situation. It turns out that the biggest factor was the score (especially when time was running out.) A losing team needs to be passing the ball more often to try to come back, so their 2nd and 10 situations are more likely to follow passes on 1st down. If those teams are *still* calling rushing plays often, it’s even more evidence that something strange is going on.

Ok, so controlling for game situation doesn’t explain away the spike in rushing percent at 2nd and 10. Is it due to coaches’ impulse to alternate their play calling?

Maybe, but that can’t be the whole story. If it were, I would expect to see the trend consistent across different 2nd down scenarios. But when we look at all 2nd-down distances, not just 2nd and 10, we see something else:

[Chart: 2nd down rushing percentage by yards gained on 1st down, split by 1st down play type]

If their teams don’t get very far on 1st down, coaches are inclined to change their play call on 2nd down. But as a team gains more yards on 1st down, coaches are less and less inclined to switch. If the team got six yards, coaches rush about 57% of the time on 2nd down regardless of whether they ran or passed last play. And it actually reverses if you go beyond that – if the team gained more than six yards on 1st down, coaches have a tendency to repeat whatever just succeeded.

It sure looks like coaches are reacting to the previous play in a predictable Win-Stay Lose-Shift pattern.

Following a hunch, I did one more comparison: passes completed for no gain vs. incomplete passes. If incomplete passes feel more like a failure, the recency bias would influence coaches to call more rushing plays after an incompletion than after a pass that was caught for no gain.

Before the re-weighting process, there’s almost no difference in play calling between the two groups – 43.3% vs. 43.6% (p=.88). However, after adjusting for the game situation – especially quarterback accuracy – the trend reemerges: in similar game scenarios, teams rush 44.4% of the time after an incomplete and only 41.5% after passes completed for no gain. It might sound small, but with 20,000 data points it’s a pretty big difference (p < 0.00005)

All signs point to the recency bias being the primary culprit.

Reasons to Doubt:

1) There are a lot of variables I didn’t control for, including fatigue, player substitutions, temperature, and whether the game clock was stopped in between plays. Any or all of these could impact the play calling.

2) Brian Burke’s (and my) initial premise was that if teams are irrationally rushing more often after incomplete passes, defenses should be able to prepare for this and exploit the pattern. Conversely, going against the trend should be more likely to catch the defense off-guard.

I really expected to find plays gaining more yards if they bucked the trends, but it’s not as clear as I would like.  I got excited when I discovered that rushing plays on 2nd and 10 did worse if the previous play was a pass – when defenses should expect it more. However, when I looked at other distances, there just wasn’t a strong connection between predictability and yards gained.

One possibility is that I needed to control for more variables. But another possibility is that while defenses *should* be able to exploit a coach’s predictability, they can’t or don’t. To give Brian the last words:

But regardless of the reasons, coaches are predictable, at least to some degree. Fortunately for offensive coordinators, it seems that most defensive coordinators are not aware of this tendency. If they were, you’d think they would tip off their own offensive counterparts, and we’d see this effect disappear.

Quantifying the Trump-iness of Political Sentences

You could say that Donald Trump has a… distinct way of speaking. He doesn’t talk the way other politicians do (even ignoring his accent), and the contrast between him and Clinton is pretty strong. But can we figure out what differentiates them? And then, can we find the most… Trump-ish sentence?

That was the challenge my friend Spencer posed to me as my first major foray into data science, the new career I’m starting. It was the perfect project: fun, complicated, and requiring me to learn new skills along the way.

To find out the answers, read on! The results shouldn’t be taken too seriously, but they’re amusing and give some insight into what might be important to each candidate and how they talk about the political landscape. Plus, it serves as a portfolio project demonstrating the data science techniques I’m learning.

If you want to play with the model yourself, I also put together an interactive javascript page for you: you can test your judgment compared to its predictions, browse the most Trumpish/Clintonish sentences and terms, and enter your own text for the model to evaluate.


To read about how the model works, I wrote a rundown with both technical and non-technical details below the tables and graphs. But without further ado, the results:

The Trump-iest and Clinton-est Sentences and Phrases from the 2016 Campaign:

Clinton
Top sentence: “That’s why the slogan of my campaign is stronger together because I think if we work together and overcome the divisiveness that sometimes sets americans against one another and instead we make some big goals and I’ve set forth some big goals, getting the economy to work for everyone, not just those at the top, making sure we have the best education system from preschool through college and making it affordable and somp[sic] else.” — Presidential Candidates Debate

Predicted Clinton: 0.99999999999
Predicted Trump: 1.04761466567e-11

Frustratingly, I couldn’t download or embed the C-SPAN video for this clip, so here are two of the other top 5 Clinton-iest sentences:

Presidential Candidate Hillary Clinton Rally in Orangeburg, South Carolina

Presidential Candidate Hillary Clinton Economic Policy Address

Trump

Top sentence: “As you know, we have done very well with the evangelicals and with religion generally speaking, if you look at what’s happened with all of the races, whether it’s in south carolina, i went there and it was supposed to be strong evangelical, and i was not supposed to win and i won in a landslide, and so many other places where you had the evangelicals and you had the heavy christian groups and it was just — it’s been an amazing journey to have — i think we won 37 different states.” — Faith and Freedom Coalition Conference

Predicted Clinton: 4.29818403092e-11
Predicted Trump: 0.999999999957

Frustratingly, I couldn’t download or embed the C-SPAN video for this clip either, so here are two of the other top 5 Trump-iest sentences:

Presidential Candidate Donald Trump Rally in Arizona

Presidential Candidate Donald Trump New York Primary Night Speech

Top Terms

Clinton terms

Term Multiplier
my husband 12.95
recession 10.28
attention 9.72
wall street 9.44
grateful 9.23
or us 8.39
citizens united 7.97
mother 7.20
something else 7.17
strategy 7.05
clear 6.81
kids 6.74
gun 6.69
i remember 6.51
corporations 6.51
learning 6.36
democratic 6.28
clean energy 6.24
well we 6.14
insurance 6.14
grandmother 6.12
experiences 6.00
progress 5.94
auto 5.90
climate 5.89
over again 5.85
often 5.80
a raise 5.71
about what 5.68
immigration reform 5.62
Trump terms

Term Multiplier
tremendous 14.57
guy 10.25
media 8.60
does it 8.24
hillary 8.15
politicians 8.00
almost 7.83
incredible 7.42
illegal 7.16
general 7.03
frankly 6.97
border 6.89
establishment 6.84
jeb 6.76
allowed 6.72
obama 6.48
poll 6.24
by the way 6.21
bernie 6.20
ivanka 6.09
japan 5.98
politician 5.96
nice 5.93
conservative 5.90
islamic 5.77
hispanics 5.76
deals 5.47
win 5.43
guys 5.34
believe me 5.32

Other Fun Results:

[Chart: pronoun-usage coefficients for each candidate]

Cherrypicked pairs of terms:

Clinton term (multiplier) vs. Trump term (multiplier)
president obama (3.27) vs. obama (6.49)
immigrants (3.40) vs. illegal immigrants (4.87)
clean energy (6.24) vs. energy (1.97)
the wealthy (4.21) vs. wealth (2.11)
learning (6.36) vs. earning (1.38)
muslims (3.46) vs. the muslims (1.75)
senator sanders (3.18) vs. bernie (6.20)

How the Model Works:

Defining the problem: What makes a sentence “Trump-y?”

I decided that the best way to quantify ‘Trump-iness’ of a sentence was to train a model to predict whether a given sentence was said by Trump or Clinton. The Trumpiest sentence will be the one that the predictive model would analyze and say “Yup, the chance this was Trump rather than Clinton is 99.99%”.

Along the way, with the right model, we can ‘look under the hood’ to see what factors into the decision.

Technical details:

The goal is to build a classifier that can distinguish between the candidates’ sentences, optimizing for ROC_AUC, while still allowing us to extract meaningful/explainable coefficients.

Gathering and processing the data:

In order to train the model, I needed large bodies of text from each candidate. I ended up scraping transcripts from events on C-SPAN.org. Unfortunately, they’re uncorrected closed caption transcripts and contained plenty of typos and misattributions. On the other hand, they’re free.

I did a bit to clean up some recurring problems like the transcript starting every quote section with “Sec. Clinton:” or including descriptions like [APPLAUSE] or [MUSIC]. (Unfortunately, they don’t reliably mark the end of the music, and C-SPAN sometimes claims that Donald Trump is the one singing ‘You Can’t Always Get What You Want.’)

Technical details:

I ended up learning to use Python’s Beautiful Soup library to identify the list of videos C-SPAN considers campaign events by the candidates, find their transcripts, and grab only the parts they supposedly said. I learned to use some basic regular expressions to do the cleaning.

My scraping tool is up on github, and is actually configured to be able to grab transcripts for other people as well.

Converting the data into usable features

After separating the large blocks of text into sentences and then words, I had some decisions to make. In an effort to focus on interesting and meaningful content, I removed sentences that were too short or too long – “Thank you” comes up over and over, and the longest sentences tended to be errors in the transcription service. It’s a judgement call, but I wanted to keep half the sentences, which set cutoffs at 9 words and 150 words. 34,108 sentences remained.

A common technique in natural language processing is to remove the “stopwords” – common non-substantive words like articles (a, the), pronouns (you, we), and conjunctions (and, but). However, following James Pennebaker’s research, which found these words are surprisingly useful in predicting personality, I left them in.

Now we have what we need: sequences of words that the model can consider evidence of Trump-iness.

Technical details:

I used NLTK to tokenize the text into sentences, but wrote my own regular expressions to tokenize the words. I considered it important to keep contractions together and include single-character tokens, which the standard NLTK function wouldn’t have done.

I used a CountVectorizer from sklearn to extract ngrams and later selected the most important terms using a SelectFromModel with a Lasso Logistic Regression. It was a balance – more terms would typically improve accuracy, but water down the meaningfulness of each coefficient.

I tested using various additional features, like parts of speech and lemmas (using the fantastic Spacy library) and sentiment analysis (using the Textblob library) but found that they only provided marginal benefit and made the model much slower. Even just using 1-3 ngrams, I got 0.92 ROC_AUC.
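Pieced together, the core of that pipeline looks roughly like this — a sketch based on the description above, not the exact notebook code (I’m using CountVectorizer’s default tokenizer rather than the custom regular expressions, and the regularization strengths `C` are placeholders for the grid-searched values):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

model = Pipeline([
    # 1-3 word ngrams; stopwords are deliberately kept in
    ("counts", CountVectorizer(ngram_range=(1, 3), lowercase=True)),
    # Lasso (L1) logistic regression picks out the most useful terms
    ("select", SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1))),
    # Regularized logistic regression gives the explainable coefficients
    ("classify", LogisticRegression(C=1.0, max_iter=1000)),
])

# model.fit(train_sentences, train_labels)                  # labels: 1 = Trump, 0 = Clinton
# trump_prob = model.predict_proba(test_sentences)[:, 1]    # 'Trumpiness' of each sentence
```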

Choosing & Training the Model

One of the most interesting challenges was avoiding overfitting. Without taking countermeasures, the model could look at a typo-riddled sentence like “Wev justv don’tv winv anymorev.” and say “Aha! Every single one of those words are unique to Donald Trump, therefore this is the most Trump-like sentence ever!”

I addressed this problem in two ways: the first is by using regularization, a standard machine learning technique that penalizes a model for using larger coefficients. As a result, the model is discouraged from caring about words like ‘justv’ which might only occur two times, since they would only help identify those couple sentences. On the other hand, a word like ‘frankly’ helps identify many, many sentences and is worth taking a larger penalty to give it more importance in the model.

The other technique was to use batch predictions – dividing the sentences into 20 chunks, and evaluating each chunk by only training on the other 19. This way, if the word ‘winv’ only appears in a single chunk, the model won’t see it in the training sentences and won’t be swayed. Only words that appear throughout the campaign have a significant impact in the model.

Technical details:

The model uses a logistic regression classifier because it produces very explainable coefficients. If that weren’t a factor, I might have tried a neural net or SVM (I wouldn’t expect a random forest to do well with such sparse data.) In order to set the regularization parameters for both the final classifier and for the feature-selection Lasso Logistic Regressor, I used sklearn’s cross-validated gridsearch object, optimizing for ROC_AUC.

During the prediction process, I used a stratified Kfold to divide the data in order to ensure each chunk would have the appropriate mix of Trump and Clinton sentences. It was tempting to treat the sentences more like a time series and only use past data in the predictions, but we want to consider how similar old sentences are to the whole corpus.
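A compact way to get those out-of-fold scores in scikit-learn is cross_val_predict with a stratified split — a sketch that assumes the `model` pipeline and `sentences`/`labels` arrays from above:

```python
from sklearn.model_selection import StratifiedKFold, cross_val_predict

folds = StratifiedKFold(n_splits=20, shuffle=True, random_state=0)

# Every sentence is scored by a model trained only on the other 19 chunks.
trump_prob = cross_val_predict(model, sentences, labels, cv=folds, method="predict_proba")[:, 1]
```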

Interpreting and Visualizing the Results:

The model produced two interesting types of data: how likely the model thought each sentence was spoken by Trump or Clinton (how ‘Trumpish’ vs. ‘Clintonish’ it is), and how any particular term impacts those predicted odds. So if a sentence is predicted to be spoken by Trump with estimated 99.99% probability, the model considers it extremely Trumpish.

Each term’s multiplier indicates how that word or phrase impacts the predicted odds. The model starts at 1:1 (50%/50%), and let’s say the sentence includes the word “incredible” – a Trump multiplier of 7.42. The odds are now 7.42 : 1, or roughly 88% in favor of Trump. If the model then sees the word “grandmother” – a Clinton multiplier of 6.12 – its estimated odds become 7.42 : 6.12 (or about 1.21 : 1), roughly 55% Trump. Each term has a multiplying effect, so a 4x word and 2x word together have as much impact as an 8x word – not 6x.

Technical details:

In order to visualize the results, I spent a bunch of time tweaking the matplotlib package to generate a graph of coefficients, which I used for the pronouns above. I made sure to use a logarithmic scale, since the terms are multiplicative.

In addition, I decided to teach myself enough javascript to learn to use the D3 library – allowing interactive visualizations and the guessing game where players can try to figure out who said a given random sentence from the campaign trail. There are a lot of ways the code could be improved, but I’m pleased with how it turned out given that I didn’t know any D3 prior to this project.

An Atheist’s Defense of Rituals: Ceremonies as Traffic Lights

The idea of a coming-of-age ceremony has always been a bit strange to me as an atheist. Sure, I attended more than my fair share of Bat and Bar Mitzvahs in middle school. But it always struck me as odd for us to pretend that someone “became an adult” on a particular day, rather than acknowledging it was a gradual process of maturation over time. Why can’t we just all treat people as their maturity level deserves?

The same goes with weddings – does a couple’s relationship really change in a significant way marked by a ceremony? Or do two people gradually fall in love and grow committed to each other over time? Moving in with each other marks a discrete change, but what does “married” change about the relationship?

But my thinking has been evolving since reading this fantastic post about rituals by Brett and Kate McKay at The Art of Manliness. Not only do the rituals acknowledge a change, they use psychological and social reinforcement to help the individuals make the transition more fully:

One of the primary functions of ritual is to redefine personal and social identity and move individuals from one status to another: boy to man, single to married, childless to parent, life to death, and so on.

Left to follow their natural course, transitions often become murky, awkward, and protracted. Many life transitions come with certain privileges and responsibilities, but without a ritual that clearly bestows a new status, you feel unsure of when to assume the new role. When you simply slide from one stage of your life into another, you can end up feeling between worlds – not quite one thing but not quite another. This fuzzy state creates a kind of limbo often marked by a lack of motivation and direction; since you don’t know where you are on the map, you don’t know which way to start heading.

Just thinking your way to a new status isn’t very effective: “Okay, now I’m a man.” The thought just pings around inside your head and feels inherently unreal. Rituals provide an outward manifestation of an inner change, and in so doing help make life’s transitions and transformations more tangible and psychologically resonant.

Brett and Kate McKay cover a range of aspects of rituals, but I was particularly struck by the game theory implications of these ceremonies. By coordinating society’s expectations in a very public manner, transition rituals act like traffic lights to make people feel comfortable and confident in their course of action.

The Value of Traffic Lights

Traffic lights are a common example in game theory. Imagine that you’re driving toward an unmarked intersection and see another car approaching from the right. You’re faced with a decision: do you keep going, or brake to a stop?

If you assume they’re going to keep driving, you want to stop and let them pass. If you’re wrong, you both lose time and there’s an awkward pause while you signal to each other to go.

If you assume they’re going to stop, you get to keep going and maintain your speed. Of course, if you’re wrong and they keep barreling forward, you risk a deadly accident.

Things go much more smoothly when there are clear street signs or, better yet, a traffic light coordinating everyone’s expectations.
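For the game-theory-inclined, here’s the intersection written as a toy payoff matrix, with payoffs listed as (you, them) — the exact numbers are arbitrary illustrations; only their ordering matters:

\begin{array}{r|cc} & \text{They go} & \text{They stop} \\ \hline \text{You go} & (-100, -100) & (2, -1) \\ \text{You stop} & (-1, 2) & (-1, -1) \end{array}

Both players “going” is the crash, both “stopping” is the time-wasting awkward pause, and the traffic light’s job is to coordinate everyone onto one of the two off-diagonal outcomes.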

Ceremonies as Traffic Lights

Now, misjudging a teenager’s maturity is unlikely to result in a deadly accident. But, with reduced stakes, the model still applies.

As a teen gets older, members of society don’t always know how to treat him – as a kid or adult. Each type of misaligned expectations is a different failure mode: If you treat him as a kid when he expected to be treated as an adult, he might feel resentful of the “overbearing adult”. If you treat him as an adult when he was expecting to be treated as a kid, he might not take responsibility for himself.

A coming-of-age ritual acts like the traffic light to minimize those failure modes. At a Bar or Bat Mitzvah, members of society gather with the teenager and essentially publicly signal “Ok everyone, we’re switching our expectations… wait for it… Now!”

It’s important that the information is known by all to be known to all – what Steven Pinker calls common or mutual knowledge:

“In common knowledge, not only does A know x and B know x, but A knows that B knows x, and B knows that A knows x, and A knows that B knows that A knows x, ad infinitum.”

If you weren’t sure that the oncoming car could see their traffic light, it would be almost as bad as if there were no light at all. You couldn’t trust your green light because they might not stop. Not only do you need to know your role, but you need to know that everyone knows their role and trusts that you know yours… etc.

Public ceremonies gather everyone to one place, creating that common knowledge. The teenager knows that everyone expects him to act as an adult, society knows that he expects them to treat him as one, and everyone knows that those expectations are shared. Equipped with this knowledge, the teen can count on consistent social reinforcement to minimize awkwardness and help him adopt his new identity.

Obviously, these rituals are imperfect – Along with the socially-defined parts of identity, there are internal factors that make someone more or less ready to be an adult. Quite frankly, setting 13 as the age of adulthood is probably too young.

But that just means we should tweak the rituals to better fit our modern world. After all, we have precise engineering to set traffic light schedules, and it still doesn’t seem perfect (this XKCD comes to mind).

That’s what makes society and civilization powerful. We’re social creatures, and feel better when we feel comfortable in our identity – either as a child or adult, as single or married, as grieving or ready to move on. Transition rituals serve an important and powerful role in coordinating those identities.

We shouldn’t necessarily respect them blindly, but I definitely respect society’s rituals more after thinking this through.

To take an excerpt from a poem by Bruce Hawkins:

Three in the morning, Dad, good citizen
stopped, waited, looked left, right.
He had been driving nine hundred miles,
had nearly a hundred more to go,
but if there was any impatience
it was only the steady growl of the engine
which could just as easily be called a purr.

I chided him for stopping;
he told me our civilization is founded
on people stopping for lights at three in the morning.

The Matrix Meets Braid: Artificial Brains in Gunfights

It’s The Matrix meets Braid: a first-person shooter video game “where the time moves only when you move.” You can stare at the bullets streaking toward you as long as you like, but moving to dodge them causes the enemies and bullets to move forward in time as well.

The game is called SUPERHOT, and the designers describe it by saying “With this simple mechanic we’ve been able to create gameplay that’s not all about reflexes – the player’s main weapon is careful aiming and smart planning – while not compromising on the dynamic feeling of the game.”

Here’s the trailer:

I’ve always loved questions about what it would be like to distort time for yourself relative to the rest of the universe (and the potential unintended consequences, as we explored in discussing why The Flash is in a special hell.)

In Superhot, it’s not that you can distort time exactly – after all, whenever you take a step, your enemies get the same amount of time to take a step themselves. Instead, your brain is running as fast as it likes while (the rest of) your body remains in the same time stream as everything else.

And then it struck me: this might be close to the experience of an emulated brain housed in a regular-sized body.

Let’s say that, in the future, we artificially replicate/emulate human minds on computers. And let’s put an emulated human mind inside a physical, robotic body. The only limits on how fast it can think are its hardware and its programming. As technology and processor speeds improve, the “person” could think faster and faster, and would experience the outside world as moving slower and slower in comparison.

… but even though you might have a ridiculously high processing speed to think and analyze a situation, your physical body is still bound by the normal laws of physics. Moving your arms or legs requires moving forward in the same stream of time as everyone else. In order to, say, turn your head to look to your left and gather more information, you need to let time pass for your enemies, too.

Robin Hanson, professor of economics at George Mason University and author of Overcoming Bias, has put a lot of thought into the implications of whole-brain emulation. So I asked him:

Is Superhot what an emulated human would experience in a gunfight?

His reply:

“An em could usually speed up its mind to deal with critical situations, though this would cost more per objective second. So a first-person shooter where time only moves when you do does move in the direction of letting the gamer experience action in an em world. Even better would be to let the gamer change the rate at which game-time seems to move, to have a limited gamer-time budget to spend, and to give other non-human game characters a similar ability.”

He’s right: thinking faster would require running more cycles per second, which takes resources. And yeah, you would need infinite processing speed to think indefinitely while the rest of the world was frozen. It would be more consistent to add a “mental cycle” budget that ran down at a constant rate from the gamer’s external point of view.
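
Here’s a rough sketch of how that budget might work in a game loop (the class, the numbers, and the drain rule are all invented for illustration; nothing here comes from SUPERHOT or from Hanson):

    # A hypothetical "mental cycle" budget (all names and numbers invented).
    # The player's real time stands in for the em's subjective time; the game
    # world advances more slowly the faster the em is thinking.
    class MentalCycleBudget:
        def __init__(self, subjective_seconds=300.0):
            self.budget = subjective_seconds  # fast-thinking time available

        def step(self, real_dt, speedup):
            """One frame: real_dt seconds pass for the player (subjective time).
            Thinking at `speedup`x drains the budget at a constant rate in the
            player's own time; the game world advances 1/speedup as fast.
            Returns how many objective (in-game) seconds elapse."""
            if self.budget <= 0.0:
                speedup = 1.0  # budget spent: the mind drops back to normal speed
            else:
                self.budget = max(0.0, self.budget - real_dt)
            return real_dt / speedup

    budget = MentalCycleBudget(subjective_seconds=300.0)
    game_dt = budget.step(real_dt=1.0, speedup=100.0)
    print(game_dt, budget.budget)  # 0.01 objective seconds pass; 299.0 s remain

At 100x, a single objective second costs 100 seconds of the budget, which captures Hanson’s point that speeding up “would cost more per objective second.”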

I don’t know about you, but I would buy that game! (Even if a multi-player mode would be impossible.)

Why Decision Theory Tells You to Eat ALL the Cupcakes

Imagine that you have a big task coming up that requires an unknown amount of willpower – you might have enough willpower to finish, you might not. You’re gearing up to start when suddenly you see a delicious-looking cupcake on the table. Do you indulge in eating it? According to psychology research and decision-theory models, the answer isn’t simple.

If you resist the temptation to eat the cupcake, current research indicates that you’ve depleted your stores of willpower (psychologists call it ego depletion), which makes you less likely to have enough willpower left to finish your big task. So maybe you should save your willpower for the big task ahead and eat the cupcake!

…But if you’re convinced already, hold on a second. How easily you give in to temptation gives evidence about your underlying strength of will. After all, someone with weak willpower will find the reasons to indulge more persuasive. If you end up succumbing to the temptation, it’s evidence that you’re a person with weaker willpower, and are thus less likely to finish your big task.

How can eating the cupcake cause you to be more likely to succeed while also giving evidence that you’re more likely to fail?

Conflicting Decision Theory Models

The strangeness lies in the difference between two conflicting models of how to make decisions. Luke Muehlhauser describes them well in his Decision Theory FAQ:

This is not a “merely verbal” dispute (Chalmers 2011). Decision theorists have offered different algorithms for making a choice, and they have different outcomes. Translated into English, the [second] algorithm (evidential decision theory or EDT) says “Take actions such that you would be glad to receive the news that you had taken them.” The [first] algorithm (causal decision theory or CDT) says “Take actions which you expect to have a positive effect on the world.”

The crux of the matter is how to handle the fact that we don’t know how much underlying willpower we started with.

Causal Decision Theory asks, “How can you cause yourself to have the most willpower?”

It focuses on the fact that, in any state, spending willpower resisting the cupcake causes ego depletion. Because of that, it says our underlying amount of willpower is irrelevant to the decision. The recommendation stays the same regardless: eat the cupcake.

Evidential Decision Theory asks, “What will give evidence that you’re likely to have a lot of willpower?”

We don’t know whether we’re starting with strong or weak will, but our actions can reveal that one state or another is more likely. It’s not that we can change the past – Evidential Decision Theory doesn’t look for that causal link – but our choice indicates which possible version of the past we came from.

Yes, seeing someone undergo ego depletion would be evidence that they lost a bit of willpower.  But watching them resist the cupcake would probably be much stronger evidence that they have plenty to spare.  So you would rather “receive news” that you had resisted the cupcake.
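
To see how the two algorithms can point in opposite directions, here’s a minimal sketch with completely made-up numbers (the prior, the success probabilities, and the “how likely is each type to give in” likelihoods are all illustrative assumptions, not anything from the psychology literature):

    # Toy contrast of CDT vs EDT on the cupcake dilemma (numbers invented).
    # Hidden state: underlying willpower is "strong" or "weak", 50/50 prior.
    prior = {"strong": 0.5, "weak": 0.5}

    # P(finish the big task | willpower, action). Resisting costs willpower
    # (ego depletion), so it lowers the chance of finishing in either state.
    p_finish = {
        ("strong", "eat"): 0.9, ("strong", "resist"): 0.7,
        ("weak", "eat"): 0.4, ("weak", "resist"): 0.2,
    }

    # P(action | willpower): the weak-willed give in more readily, which is
    # what makes eating the cupcake evidence of weak will.
    p_action = {
        ("eat", "strong"): 0.3, ("resist", "strong"): 0.7,
        ("eat", "weak"): 0.9, ("resist", "weak"): 0.1,
    }

    def cdt_value(action):
        # CDT: the choice can't change the hidden state, so use the prior.
        return sum(prior[w] * p_finish[(w, action)] for w in prior)

    def edt_value(action):
        # EDT: condition on the action as evidence about the hidden state.
        evidence = sum(prior[w] * p_action[(action, w)] for w in prior)
        return sum(
            (prior[w] * p_action[(action, w)] / evidence) * p_finish[(w, action)]
            for w in prior
        )

    for action in ("eat", "resist"):
        print(f"{action:>6}  CDT: {cdt_value(action):.3f}  EDT: {edt_value(action):.3f}")
    # CDT prefers eating (no depletion); EDT prefers resisting (better news).

With these particular numbers, CDT recommends eating the cupcake and EDT recommends resisting it – exactly the conflict described above. Different numbers could make the two agree; the point is only that they can come apart.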

A Third Option

Each of these models has strengths and weaknesses, and a number of thought experiments – especially the famous Newcomb’s Paradox – have sparked ongoing discussion and disagreement about which decision theory model is best.

One attempt to improve on standard models is Timeless Decision Theory, a method devised by Eliezer Yudkowsky of the Machine Intelligence Research Institute.  Alex Altair recently wrote up an overview, stating in the paper’s abstract:

When formulated using Bayesian networks, two standard decision algorithms (Evidential Decision Theory and Causal Decision Theory) can be shown to fail systematically when faced with aspects of the prisoner’s dilemma and so-called “Newcomblike” problems. We describe a new form of decision algorithm, called Timeless Decision Theory, which consistently wins on these problems.

It sounds promising, and I can’t wait to read it.

But Back to the Cupcakes

For our particular cupcake dilemma, there’s a way out:

Precommit. You need to promise – right now! – to always eat the cupcake when it’s presented to you. That way you don’t spend any willpower on resisting temptation, but your indulgence doesn’t give any evidence of a weak underlying will.

And that, ladies and gentlemen, is my new favorite excuse for why I ate all the cupcakes.

How has Bayes’ Rule changed the way I think?

People talk about how Bayes’ Rule is so central to rationality, and I agree. But given that I don’t go around plugging numbers into the equation in my daily life, how does Bayes actually affect my thinking?
A short answer, in my new video below:

(This is basically what the title of this blog was meant to convey — quantifying your uncertainty.)
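
For anyone who does want to plug numbers in once, here’s a toy update (the numbers are invented, not taken from the video):

    # A toy Bayes update with invented numbers: a test that catches 90% of
    # real cases but false-alarms 9% of the time, for a 1% base rate.
    prior = 0.01            # P(hypothesis)
    p_e_given_h = 0.90      # P(evidence | hypothesis)
    p_e_given_not_h = 0.09  # P(evidence | not hypothesis): false-positive rate

    posterior = (p_e_given_h * prior) / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    )
    print(round(posterior, 3))  # ~0.092: the evidence shifts belief, it doesn't settle it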
