I’ve been waiting for the chorus of pundits who were wrong to be finished telling us why the polls they believed were also wrong. As I’ve been forced to say before and will say more forcefully in the future, most of these “pundits” are nothing more than glorified poll-readers. As a result, they too were embarrassed on Election Day because the polls they deemed to be the industry “Gold Standard” were the only ones they paid any mind.
They are basically know-nothings posing as statisticians who based their predictions on inaccurate assumptions derived from flawed data, which were derived from inaccurate assumptions made by other flawed statisticians.
I know, it’s enough to make your head spin.
The polls in 2016 were wrong for a reason. In fact, there are several reasons for the industry-wide failure and, thus far, I have heard nothing to convince me they intend to change anything to fix it. The excuse-making in the wake of this monumental disaster greatly disturbs me because the industry not only gauges public opinion but can also shape it.
I’d expect ethical people to truly want to get to the bottom of the problem.
Sean Trende, the analyst at RealClearPolitics.com, tried to make the case that polling in 2016 was more accurate than it was in 2012. He attributed the miss to sampling error and argued the results were basically within the margin of error in most cases. Nate Silver, the oft-cited “statistician” and election forecaster who has now blown three straight election cycles himself, basically blamed the failure on media complacency.
Let me just flat out call this what it is: nonsense. There was plenty of evidence to suggest pollsters were, at best, wildly inaccurate and, at worst, flat-out engaged in unethical behavior.
I’ll point the industry in the right direction, but I’m damn sure not going to give up proprietary information. I’m now America’s most accurate pollster and election forecaster, for the second straight cycle. I don’t have anything to prove to anyone. This column is more for election-watchers and American voters than it is for Big Media or others who are failing in the industry.
Before we get into it, let’s remember just how much bogus polling was impacting the election narrative, a narrative that turned out to be completely false.
In August, Chris Stirewalt, the digital politics editor at Fox News, mocked Trump and his supporters for not believing the polls. In “Don’t kid yourself, the polls are usually right,” Stirewalt claimed the data suggested the Republican nominee was headed for “the worst popular vote defeat since 1984.”
Instead, President-Elect Donald Trump won Blue States that Democrats hadn’t lost since 1984, states that, up until Election Day, pollsters showed him losing outside of the margin of error, despite Mr. Trende’s claim to the contrary.
The Marquette University Law School Poll showed Mr. Trump trailing Mrs. Clinton by 6 points in Wisconsin, the same margin by which Mitchell Research found him trailing in Michigan. Those results were in complete contradiction to the final PPD Battleground State Polls, which showed a statistical tie leaning slightly in Mr. Trump’s favor.
Historically, Mr. Stirewalt’s claim has largely been true. But that’s no longer the case and, worse, there were in fact signs suggesting as much before the election. “This is the dadgum presidency” was his argument against those citing the U.K. referendum known as Brexit. He even mocked a tweet the candidate himself sent out that very morning vowing pundits like Mr. Stirewalt would “soon be calling me MR. BREXIT!”
For a “primer on the validity of polling” he directed readers to check out “Nate Silver’s treatise on the topic,” which argued polls on the presidential level have been right for generations.
You get the picture. Let’s move on to what actually happened and what, in all likelihood, is going to happen again. No one single problem is responsible for the failure, which differs depending on the polling firm. But a good place to start is the abysmal response rates for random-sample polls conducted over the phone.
A few days before the election, I was listening to Bret Baier on Special Report interview the pollster who conducts the Fox Poll. His final survey found Mrs. Clinton ahead by 4 points, and he used the oldest analogy in the book, the one that argues you don’t need to consume an entire bowl of tomato soup just to taste it and determine how hot it is.
Well, think of the bowl of tomato soup now being a bowl of beef stew and the tablespoon is so small you are only able to taste the broth. Meanwhile, you are missing the potatoes, the celery, the carrots and, most importantly, the beef.
Put simply, the response rates are now so low pollsters aren’t able to predict the composition of the electorate, let alone their voting preference.
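The soup-versus-stew point can be made concrete with a toy simulation. This is only a sketch with invented numbers (the response rates, population split, and dial counts are all assumptions, not any firm’s actual figures): if one candidate’s supporters answer the phone at half the rate of the other’s, a single-digit response rate produces a sample that badly misstates the electorate.

```python
import random

random.seed(42)

# Hypothetical electorate: 48% support candidate A, 52% candidate B.
# Assume (for illustration only) that B's supporters answer the phone
# at half the rate of A's supporters -- the broth without the beef.
RESP_A, RESP_B = 0.09, 0.045  # single-digit response rates

sample = []
for _ in range(20_000):  # dial attempts
    is_a = random.random() < 0.48
    rate = RESP_A if is_a else RESP_B
    if random.random() < rate:
        sample.append("A" if is_a else "B")

share_a = sample.count("A") / len(sample)
print(f"respondents: {len(sample)}, A's measured share: {share_a:.1%}")
# A's true share is 48%, yet the measured share lands well above 50%,
# because the sample reflects who answered, not who will vote.
```

Note what the sketch shows: with these assumed numbers, roughly 20,000 dials yield only about 1,300 completed interviews, and the raw topline is off by double digits before any weighting is applied.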
The results, for obvious reasons, can be disastrous, and the truth of the matter is that many in the industry know what I’m saying is true. It’s just too damn expensive to do what needs to be done (which you can help us with).
In early October, when the PPD Poll still showed a far more competitive race than the RCP average indicated, a former executive at Nielsen and a former executive at a company now owned by Nielsen both sent emails commending me for standing my ground. One told of his own difficulties with response rates and the other cited recent findings by the Pew Research Center.
Yet another person familiar with the Monmouth Poll told me Patrick Murray was “tipping the scale” by filling in the blanks created by insufficient data.
His final poll showed Mrs. Clinton leading nationally by 6 points, and his state polls consistently found less support for Mr. Trump compared to other polls conducted during the same period.
SAMPLE RESPONSE BIAS
When I defended the LA Times Poll in October, which was criticized as “experimental” because it re-interviewed the same Internet panel, I was largely addressing something called sample response bias. Research at PPD and Columbia University unequivocally supports the theory that large swings, such as the swing toward Hillary Clinton after the Access Hollywood tape leaked, or toward Mitt Romney after the first debate against Barack Obama in 2012, are actually artifacts of the polling sample itself.
Rather than legitimate swings in voter preference, they are the result of partisans being more or less likely to agree to participate in the poll when the phone rings. This can depend on events during the campaign, but when done correctly Internet polling largely protects against these artifacts.
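The mechanism is easy to demonstrate. In this sketch (every number is an assumption chosen for illustration), nobody in the electorate ever changes their mind; the only thing that changes after a bad news cycle is one side’s willingness to pick up the phone. The poll still registers a large “swing.”

```python
import random

random.seed(7)

def run_poll(n_dials, resp_x, resp_y):
    """Simulate one phone poll over a fixed 50/50 electorate.

    resp_x / resp_y are the response propensities of candidate X's
    and candidate Y's supporters. Returns X's share of respondents.
    """
    xs = ys = 0
    for _ in range(n_dials):
        supports_x = random.random() < 0.50
        rate = resp_x if supports_x else resp_y
        if random.random() < rate:
            if supports_x:
                xs += 1
            else:
                ys += 1
    return xs / (xs + ys)

before = run_poll(50_000, 0.08, 0.08)  # normal week: equal propensity
after = run_poll(50_000, 0.05, 0.08)   # X's voters now screen their calls

print(f"before news cycle: X at {before:.1%}")
print(f"after news cycle:  X at {after:.1%}")
# The measured "swing" is an artifact of who answers, not of anyone
# actually changing their voting preference.
```

A panel that re-interviews the same respondents, as the LA Times Poll did, is largely immune to this effect, because its composition does not churn with the news cycle.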
And I’m not the only one who believes this is a reality.
In section “d) Media (paid), (earned) and (social), and polling” outlined in his email to the Clinton campaign, Google CEO Eric Schmidt discusses the use of Internet polling over phone polling.
“Find a way to do polling online and not on phones,” he wrote. It was a strategy Clinton campaign manager Robby Mook obviously agreed with. As revealed by WikiLeaks, the Clinton campaign not only didn’t believe the phone-based media polls before the Michigan primary but actually anticipated a loss to Bernie Sanders.
Strange, considering public polls gave her a double-digit lead, don’t you think?
WEIGHTING FOR PARTY ID
In 2012, weighting for party identification was largely frowned upon and yet, for some reason, it was the standard this year outside of PPD and Selzer & Co. We weight for the usual demographics, apply a proprietary likely voter model and let the electorate tell us what it will look like. In 2016, in order to get the outcome they wanted, pollsters decided what they thought the electorate would look like and adjusted.
That’s backward. If a pollster has a quality sample, then the electorate will speak to them if they are listening. They shouldn’t ignore what respondents are trying to tell them.
Our final PPD U.S. Presidential Election Daily Tracking Poll reflected a D/R/I split of Democrat +4, which is exactly what it turned out to be. Pollsters weighted their sample to reflect the electorate they wanted to show up on Election Day, not the electorate voters were telling them would show up.
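The difference between the two approaches can be sketched in a few lines. The respondent counts and preference rates below are invented for illustration (they are not PPD data or any named firm’s data); the point is simply that weighting the same raw interviews to an assumed party-ID split, rather than the split the sample itself produced, moves the topline by a couple of points.

```python
# Hypothetical respondent counts and candidate preference by party ID.
# All figures are illustrative assumptions, not any firm's actual data.
sample = {  # party: (respondents, share supporting the Republican)
    "D": (430, 0.08),
    "R": (390, 0.90),
    "I": (280, 0.52),
}
total = sum(n for n, _ in sample.values())

def topline(weights):
    """Republican's overall share under a given party-ID weighting."""
    return sum(weights[p] * gop for p, (_, gop) in sample.items())

# Let the sample speak: weights are the observed party shares
# (about D+4 with these counts).
observed = {p: n / total for p, (n, _) in sample.items()}

# Or impose an assumed electorate instead, e.g. a D+9 split.
imposed = {"D": 0.40, "R": 0.31, "I": 0.29}

print(f"observed-split topline: {topline(observed):.1%}")
print(f"imposed-split topline:  {topline(imposed):.1%}")
```

Same interviews, same answers; only the pollster’s assumption about the electorate differs, and it alone is enough to shift the published margin.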
We will get into this more in the coming days and we will name polling firms specifically. But we know for a fact certain pollsters were willing to jeopardize their reputations rather than tell the truth and risk galvanizing Trump voters. Living in a battleground state, my wife and I were polled frequently. One live interviewer, who could barely speak English, actually called the house and specifically asked to speak with my wife.
She is a 30-something-year-old Hispanic woman, statistically likely to be a Democrat. They had no intention of speaking with me, a 30-something-year-old man statistically likely to be a Republican.
We asked, and were shocked to learn it was a firm that conducts fieldwork for a joint poll sponsored by two major media outlets.
So far, you’ve read a mini-novel and I’ve not even scratched the surface. I can’t imagine what went wrong.