Quantifying the Trump-iness of Political Sentences

You could say that Donald Trump has a… distinct way of speaking. He doesn’t talk the way other politicians do (even ignoring his accent), and the contrast between him and Clinton is pretty strong. But can we figure out what differentiates them? And then, can we find the most… Trump-ish sentence?

That was the challenge my friend Spencer posed to me as my first major foray into data science, the new career I’m starting. It was the perfect project: fun, complicated, and requiring me to learn new skills along the way.

To find out the answers, read on! The results shouldn’t be taken too seriously, but they’re amusing and give some insight into what might be important to each candidate and how they talk about the political landscape. Plus, it serves as a portfolio project demonstrating the data science techniques I’m learning.

If you want to play with the model yourself, I also put together an interactive JavaScript page for you: you can test your judgment against its predictions, browse the most Trumpish/Clintonish sentences and terms, and enter your own text for the model to evaluate.

[Screenshot of the interactive page]

To read about how the model works, I wrote a rundown with both technical and non-technical details below the tables and graphs. But without further ado, the results:

The Trump-iest and Clinton-est Sentences and Phrases from the 2016 Campaign:

Clinton:
Top sentence: “That’s why the slogan of my campaign is stronger together because I think if we work together and overcome the divisiveness that sometimes sets americans against one another and instead we make some big goals and I’ve set forth some big goals, getting the economy to work for everyone, not just those at the top, making sure we have the best education system from preschool through college and making it affordable and somp[sic] else.” — Presidential Candidates Debate

Predicted Clinton: 0.99999999999
Predicted Trump: 1.04761466567e-11

Frustratingly, I couldn’t download or embed the C-SPAN video for this clip, so here are two of the other top 5 Clinton-iest sentences:

Presidential Candidate Hillary Clinton Rally in Orangeburg, South Carolina

Presidential Candidate Hillary Clinton Economic Policy Address

Trump:

Top sentence: “As you know, we have done very well with the evangelicals and with religion generally speaking, if you look at what’s happened with all of the races, whether it’s in south carolina, i went there and it was supposed to be strong evangelical, and i was not supposed to win and i won in a landslide, and so many other places where you had the evangelicals and you had the heavy christian groups and it was just — it’s been an amazing journey to have — i think we won 37 different states.” — Faith and Freedom Coalition Conference

Predicted Clinton: 4.29818403092e-11
Predicted Trump: 0.999999999957

Frustratingly, I couldn’t download or embed the C-SPAN video for this clip either, so here are two of the other top 5 Trump-iest sentences:

Presidential Candidate Donald Trump Rally in Arizona

Presidential Candidate Donald Trump New York Primary Night Speech

Top Terms

Clinton
Term Multiplier
my husband 12.95
recession 10.28
attention 9.72
wall street 9.44
grateful 9.23
or us 8.39
citizens united 7.97
mother 7.20
something else 7.17
strategy 7.05
clear 6.81
kids 6.74
gun 6.69
i remember 6.51
corporations 6.51
learning 6.36
democratic 6.28
clean energy 6.24
well we 6.14
insurance 6.14
grandmother 6.12
experiences 6.00
progress 5.94
auto 5.90
climate 5.89
over again 5.85
often 5.80
a raise 5.71
about what 5.68
immigration reform 5.62
Trump
Term Multiplier
tremendous 14.57
guy 10.25
media 8.60
does it 8.24
hillary 8.15
politicians 8.00
almost 7.83
incredible 7.42
illegal 7.16
general 7.03
frankly 6.97
border 6.89
establishment 6.84
jeb 6.76
allowed 6.72
obama 6.48
poll 6.24
by the way 6.21
bernie 6.20
ivanka 6.09
japan 5.98
politician 5.96
nice 5.93
conservative 5.90
islamic 5.77
hispanics 5.76
deals 5.47
win 5.43
guys 5.34
believe me 5.32

Other Fun Results:

[Graph: odds multipliers for pronouns, Clinton vs. Trump]

Cherrypicked pairs of terms:

Clinton term (multiplier) vs. Trump term (multiplier):
president obama (3.27) vs. obama (6.49)
immigrants (3.40) vs. illegal immigrants (4.87)
clean energy (6.24) vs. energy (1.97)
the wealthy (4.21) vs. wealth (2.11)
learning (6.36) vs. earning (1.38)
muslims (3.46) vs. the muslims (1.75)
senator sanders (3.18) vs. bernie (6.20)

How the Model Works:

Defining the problem: What makes a sentence “Trump-y?”

I decided that the best way to quantify the ‘Trump-iness’ of a sentence was to train a model to predict whether a given sentence was said by Trump or Clinton. The Trump-iest sentence is then the one the model is most confident about — the one it would look at and say, “Yup, the chance this was Trump rather than Clinton is 99.99%.”

Along the way, with the right model, we can ‘look under the hood’ to see what factors into the decision.

Technical details:

The goal is to build a classifier that can distinguish between the candidates’ sentences, optimizing for ROC_AUC, while allowing us to extract meaningful, explainable coefficients.
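As a rough sketch of what that means in scikit-learn terms (the toy data and parameters here are purely illustrative; the actual features and tuning are described in the sections below):

```python
# Minimal sketch: bag-of-ngrams counts feeding a logistic regression,
# evaluated with ROC AUC. Toy data stands in for the real transcripts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

sentences = ["we will win so much believe me", "frankly the polls are incredible",
             "stronger together is more than a slogan", "we need clean energy jobs"]
labels = [1, 1, 0, 0]  # 1 = Trump, 0 = Clinton

model = make_pipeline(CountVectorizer(ngram_range=(1, 3)), LogisticRegression())
print(cross_val_score(model, sentences, labels, scoring="roc_auc", cv=2))
```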

Gathering and processing the data:

In order to train the model, I needed large bodies of text from each candidate. I ended up scraping transcripts from events on C-SPAN.org. Unfortunately, they’re uncorrected closed caption transcripts and contained plenty of typos and misattributions. On the other hand, they’re free.

I did a bit of cleanup on some recurring problems, like the transcript starting every quote section with “Sec. Clinton:” or including descriptions like [APPLAUSE] or [MUSIC]. (Unfortunately, they don’t reliably mark the end of the music, and C-SPAN sometimes claims that Donald Trump is the one singing ‘You Can’t Always Get What You Want.’)

Technical details:

I ended up learning to use Python’s Beautiful Soup library to identify the list of videos C-SPAN considers campaign events by the candidates, find their transcripts, and grab only the parts they supposedly said. I learned to use some basic regular expressions to do the cleaning.
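A sketch of that kind of regex cleanup, using the problem patterns mentioned above (the real scraper’s expressions may differ):

```python
import re

def clean_transcript(text):
    # Drop bracketed stage directions such as [APPLAUSE] or [MUSIC].
    text = re.sub(r"\[[A-Z ]+\]", " ", text)
    # Drop speaker tags like "Sec. Clinton:" at the start of a quote section.
    text = re.sub(r"^(?:Sec\. Clinton|Mr\. Trump):\s*", "", text, flags=re.MULTILINE)
    # Collapse leftover whitespace.
    return re.sub(r"\s+", " ", text).strip()

print(clean_transcript("Sec. Clinton: Thank you all. [APPLAUSE] It is great to be back."))
```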

My scraping tool is up on GitHub, and it’s actually configured to be able to grab transcripts for other people as well.
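The core of the approach looks roughly like this; the URL handling and the CSS selector below are placeholders rather than C-SPAN’s actual page structure (see the GitHub repo for the real thing):

```python
import requests
from bs4 import BeautifulSoup

def get_transcript_text(video_url):
    """Fetch a video page and pull out the transcript paragraphs."""
    soup = BeautifulSoup(requests.get(video_url).text, "html.parser")
    # Hypothetical selector; the real C-SPAN markup is different.
    return " ".join(p.get_text(" ", strip=True) for p in soup.select(".transcript p"))
```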

Converting the data into usable features

After separating the large blocks of text into sentences and then words, I had some decisions to make. In an effort to focus on interesting and meaningful content, I removed sentences that were too short or too long – “Thank you” comes up over and over, and the longest sentences tended to be errors in the transcription service. It’s a judgment call, but I wanted to keep half the sentences, which set the cutoffs at 9 words and 150 words. 34,108 sentences remained.
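In code, the filter is as simple as it sounds (the 9- and 150-word cutoffs are the ones described above; the word splitting here is simplified):

```python
MIN_WORDS, MAX_WORDS = 9, 150

def keep(sentence):
    n_words = len(sentence.split())
    return MIN_WORDS <= n_words <= MAX_WORDS

all_sentences = ["Thank you.", "We have to rebuild our infrastructure and our inner cities."]
kept = [s for s in all_sentences if keep(s)]  # drops "Thank you."
```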

A common technique in natural language processing is to remove the “stopwords” – common non-substantive words like articles (a, the), pronouns (you, we), and conjunctions (and, but). However, following James Pennebaker’s research, which found these words are surprisingly useful in predicting personality, I left them in.

Now we have what we need: sequences of words that the model can consider evidence of Trump-iness.

Technical details:

I used NLTK to tokenize the text into sentences, but wrote my own regular expressions to tokenize the words. I considered it important to keep contractions together and include single-character tokens, which the standard NLTK function wouldn’t have done.
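A sketch of that tokenization step; the word-level regex below is illustrative rather than the exact pattern from the project:

```python
import re
import nltk

nltk.download("punkt", quiet=True)  # sentence-tokenizer model

def tokenize_words(sentence):
    # Keep contractions ("don't") as single tokens and allow one-letter tokens ("i").
    return re.findall(r"[a-z]+'[a-z]+|[a-z]+", sentence.lower())

for sent in nltk.sent_tokenize("We just don't win anymore. Believe me."):
    print(tokenize_words(sent))
# ['we', 'just', "don't", 'win', 'anymore']
# ['believe', 'me']
```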

I used a CountVectorizer from sklearn to extract ngrams and later selected the most important terms using a SelectFromModel with a Lasso Logistic Regression. It was a balance – more terms would typically improve accuracy, but water down the meaningfulness of each coefficient.
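Roughly, that feature-selection step looks like this (toy data; the regularization strength shown is a placeholder, since the real one was tuned as described below):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

docs = ["we will win so much believe me", "stronger together we work together"]
y = [1, 0]  # 1 = Trump, 0 = Clinton

vectorizer = CountVectorizer(ngram_range=(1, 3))  # 1- to 3-grams
X = vectorizer.fit_transform(docs)

# An L1 ("lasso") logistic regression zeroes out most coefficients;
# SelectFromModel keeps only the terms with nonzero weight.
# Higher C = weaker penalty = more terms kept.
selector = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=10.0))
X_reduced = selector.fit_transform(X, y)
kept_terms = vectorizer.get_feature_names_out()[selector.get_support()]
```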

I tested various additional features, like parts of speech and lemmas (using the fantastic spaCy library) and sentiment analysis (using the TextBlob library), but found that they provided only marginal benefit and made the model much slower. Even using just 1- to 3-grams, I got 0.92 ROC_AUC.

Choosing & Training the Model

One of the most interesting challenges was avoiding overfitting. Without countermeasures, the model could look at a typo-riddled sentence like “Wev justv don’tv winv anymorev.” and say “Aha! Every single one of those words is unique to Donald Trump, therefore this is the most Trump-like sentence ever!”

I addressed this problem in two ways: the first is by using regularization, a standard machine learning technique that penalizes a model for using larger coefficients. As a result, the model is discouraged from caring about words like ‘justv’ which might only occur two times, since they would only help identify those couple sentences. On the other hand, a word like ‘frankly’ helps identify many, many sentences and is worth taking a larger penalty to give it more importance in the model.

The other technique was to use batch predictions – dividing the sentences into 20 chunks, and evaluating each chunk by only training on the other 19. This way, if the word ‘winv’ only appears in a single chunk, the model won’t see it in the training sentences and won’t be swayed. Only words that appear throughout the campaign have a significant impact in the model.

Technical details:

The model uses a logistic regression classifier because it produces very explainable coefficients. If that weren’t a factor, I might have tried a neural net or SVM (I wouldn’t expect a random forest to do well with such sparse data.) In order to set the regularization parameters for both the final classifier and for the feature-selection Lasso Logistic Regressor, I used sklearn’s cross-validated gridsearch object, optimizing for ROC_AUC.
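A sketch of that tuning step (the grid values are illustrative; X and y stand for the ngram count matrix and speaker labels built earlier):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

param_grid = {"C": [0.01, 0.1, 1, 10, 100]}  # smaller C = stronger regularization
search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid,
                      scoring="roc_auc", cv=5)
# search.fit(X, y)                    # X: sparse ngram counts, y: 0/1 speaker labels
# best_C = search.best_params_["C"]
```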

During the prediction process, I used a stratified KFold to divide the data in order to ensure each chunk would have the appropriate mix of Trump and Clinton sentences. It was tempting to treat the sentences more like a time series and only use past data in the predictions, but we want to consider how similar old sentences are to the whole corpus.
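In scikit-learn terms, that out-of-fold scheme is essentially a stratified cross_val_predict (a sketch; X and y are the feature matrix and labels as above):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict

cv = StratifiedKFold(n_splits=20, shuffle=True, random_state=0)
# Each sentence is scored by a model trained on the other 19 chunks, so a
# typo-word that appears in only one chunk can't inflate its own score.
# probs = cross_val_predict(LogisticRegression(), X, y, cv=cv,
#                           method="predict_proba")[:, 1]  # P(Trump) per sentence
```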

Interpreting and Visualizing the Results:

The model produced two interesting types of data: how likely the model thought each sentence was spoken by Trump or Clinton (how ‘Trumpish’ vs. ‘Clintonish’ it is), and how any particular term impacts those predicted odds. So if a sentence is predicted to be spoken by Trump with estimated 99.99% probability, the model considers it extremely Trumpish.

The terms’ multipliers indicate how each word or phrase impacts the predicted odds. The model starts at 1:1 (50%/50%); let’s say the sentence includes the word “incredible” – a Trump multiplier of 7.42. The odds are now 7.42 : 1, or roughly 88% in favor of Trump. If the model then sees the word “grandmother” – a Clinton multiplier of 6.12 – its estimated odds become 7.42 : 6.12 (or about 1.21 : 1), roughly 55% Trump. Each term has a multiplying effect, so a 4x word and a 2x word together have as much impact as an 8x word – not a 6x one.
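To make the arithmetic concrete, here is the same example as code:

```python
def odds_to_probability(odds):
    return odds / (1 + odds)

odds = 1.0                         # start at 1:1, i.e. 50/50
odds *= 7.42                       # "incredible" (Trump multiplier)
print(odds_to_probability(odds))   # ~0.88 Trump
odds /= 6.12                       # "grandmother" (Clinton multiplier works against Trump)
print(odds_to_probability(odds))   # ~0.55 Trump
```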

Technical details:

In order to visualize the results, I spent a bunch of time tweaking the matplotlib package to generate a graph of coefficients, which I used for the pronouns above. I made sure to use a logarithmic scale, since the terms are multiplicative.
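A sketch of that kind of plot (the terms and multipliers below are made up for illustration, not the model’s actual pronoun coefficients):

```python
import matplotlib.pyplot as plt

terms = ["i", "we", "you", "they"]
multipliers = [1.8, 0.6, 1.2, 0.9]  # illustrative values; >1 leans Trump, <1 leans Clinton

fig, ax = plt.subplots()
ax.barh(terms, multipliers)
ax.set_xscale("log")                           # multipliers act multiplicatively
ax.axvline(1.0, color="gray", linestyle="--")  # 1.0 = no lean either way
ax.set_xlabel("Odds multiplier (log scale)")
plt.show()
```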

In addition, I decided to teach myself enough JavaScript to use the D3 library – allowing interactive visualizations and the guessing game where players can try to figure out who said a given random sentence from the campaign trail. There are a lot of ways the code could be improved, but I’m pleased with how it turned out given that I didn’t know any D3 prior to this project.

5 Responses to Quantifying the Trump-iness of Political Sentences

  1. Brent says:

    I’m geeking out over here. 🙂 Thanks; I hope you’ll post more like these.

  2. cmplxadsys says:

I’m considering learning some data science. What’s your current strategy for learning? How’d you get the basics down? What would you recommend a relative newbie do to start?

    • Jesse Galef says:

[tl;dr: for free intros, I consider the Python Codecademy course and the Machine Learning Coursera course to be valuable for starting out.]

      Great! I’m finding it to be a fascinating subject with a huge range of directions to explore. I’d consider what you find the most engaging – the visualization process, the machine learning algorithms, the statistical analysis techniques?

      In terms of getting started, there are different options with different price tags. I started out by taking a course with the General Assembly “bootcamp” (and went back to T.A. for one of their later cohorts). I found the in-person format and the added structure to be helpful for me, but I can’t promise it’s worth the cost for everyone. There’s just so much to cover – the biggest takeaway for me was a sense of what things are possible so that I could look into them further on my own.

On the other end of the spectrum, I got a lot out of the Machine Learning Coursera course taught by Andrew Ng at Stanford – it’s free and really well done. There are also free online programming courses that teach Python step-by-step with Codecademy (what I used to brush off the rust) or DataCamp.

      Possibly the best thing for me was having specific projects I enjoyed which motivated me to learn new skills – getting stuck and realizing that I needed to learn web scraping or propensity score matching or more advanced programming in order to get past a roadblock.

      • cmplxadsys says:

        Thanks! I’m specifically interested in data science as applied to medicine, as I’m coming from a medical background. While all three areas seem engaging, I lean toward statistical analysis and machine learning. But part of my goal in dipping my toes in data science is to explore exactly the question you asked: what do I find most engaging, and do I think it’s worth a career switch (or quasi-switch, as I’d still be in medicine overall).

        I have some quantitative training in addition to my medical background, so I suspect I can handle the theory. But my coding’s weak (but present).

    • Jesse Galef says:

      [Possibly helpful, possibly daunting infographic: infographicjournal.com/wp-content/uploads/2014/11/How-to-become-a-data-scientist1.jpg]
