PPD Polls Conducted By https://www.bigdatapoll.com/
Wednesday, May 23, 2018

About the People’s Pundit Daily Big Data Tracking (PPD) Poll

Most Accurate Poll of 2016

The Big Data Poll, known on People’s Pundit Daily as the PPD Poll, follows Level 1 AAPOR standards of disclosure and the WAPOR/ESOMAR code of conduct. All publicly released surveys are funded by subscribers and individual reader donations, not sponsored by any other media outlet or by any partisan or political entity.

Here’s the methodology behind how we do it.

Panel Respondents

Interviews are conducted via the Big Data Poll National Internet Polling Panel and partners, which together reach roughly 20 million people. Interviews are NOT conducted on most national holidays and, depending on the survey method (random sample vs. opt-in), samples can include varying percentages of repeat interviews from panelists, which we have demonstrated provides a more accurate gauge of shifting public opinion.

Respondents are recruited by numerous methods, including mailers, email, and social media- or website-based advertisements. Panelists also have the option to sign up for the panel directly, as with other Internet survey panels such as SurveyMonkey and YouGov. The difference, which accounts for the disparity in results, is our initial and likely voter screens.

During screening or initial interviews, respondents are asked to give their names, contact information (i.e., email and/or phone), as well as the city and zip code where they are registered or plan to register to vote. This allows us to attempt re-interviews and to obtain regional data. The information is also used to verify registration status, when possible (based on the state), for respondents who were not already identified.

We’ll attempt to contact a respondent for a repeat interview up to 10 times before removing them from the panel.

Samples

The Big Data Poll conducts two different types of surveys, random samples and opt-in Internet panels, depending on clients’ needs, goals and objectives.

Random

We select a random sample of panelists to take part in our surveys. Rather than drawing from a list of phone numbers, we randomly draw from panel respondents. At a minimum, we ask our respondents about the demographics detailed below, whether they are registered to vote, what state they live in, etc., just as phone-based pollsters have done for years.

In these samples, we calculate a traditional margin of error (MoE).

Internet Opt-In Panel

In this sample, all responses are treated as “opt-in Internet panel” even though a percentage of respondents were specifically targeted based on registration status (more on that below under Population). They are still ultimately considered opt-in, and we do NOT treat them as a random sample.

In these samples, we use a bootstrap method with a standard 95% confidence interval (CI).

Population

For political and election surveys, topline results are of likely voters, or at least our best estimate of the registered voters we view as most likely to vote based on past voting history, enthusiasm and registration status.

We do not include respondents who report during initial interviews that they are not registered, but we also don’t immediately remove them as potential panelists. They may register in the future, so we view them as worthy of a follow-up; in the past, we’ve found that following up greatly reduces the probability of unintentionally excluding new voters of various ages.

Weighting

The Big Data Poll is weighted for demographics such as age, gender, race, income, education and region based on the Current Population Survey conducted by the U.S. Census Bureau and Bureau of Labor Statistics. For political surveys, registration targets are also obtained from the most recent Current Population Survey.
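To make the weighting step concrete, here is a minimal sketch of iterative proportional fitting (“raking”), one common way to adjust a sample so its demographic shares match population targets like those in the Current Population Survey. The respondents and target shares below are invented for illustration, and this is a generic technique sketch, not the poll’s production weighting code.

```python
# A minimal raking sketch: alternately rescale weights so each variable's
# weighted shares match its population targets, until adjustments shrink.
import pandas as pd

def rake(df, targets, max_iter=50, tol=1e-6):
    """Adjust df['weight'] until each variable's weighted shares match targets.

    targets: {column: {category: population_share}}, shares summing to 1.
    """
    df = df.copy()
    df["weight"] = 1.0
    for _ in range(max_iter):
        max_shift = 0.0
        for var, shares in targets.items():
            observed = df.groupby(var)["weight"].sum() / df["weight"].sum()
            for cat, share in shares.items():
                factor = share / observed[cat]
                df.loc[df[var] == cat, "weight"] *= factor
                max_shift = max(max_shift, abs(factor - 1.0))
        if max_shift < tol:  # stop once adjustments become negligible
            break
    return df

# Invented respondents and CPS-style targets, purely illustrative.
sample = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M"],
    "region": ["South", "West", "South", "Northeast", "West", "South"],
})
targets = {
    "gender": {"F": 0.52, "M": 0.48},
    "region": {"South": 0.38, "West": 0.30, "Northeast": 0.32},
}
print(rake(sample, targets))
```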

The Big Data Poll uses a proprietary likely voter model based on responses to screening questions relating to prior voting history, enthusiasm, registration status, etc.
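While the actual model is proprietary, a cutoff-style screen illustrates the general shape: score each respondent on the factors named above and keep those at or above a threshold. The questions, point values and threshold here are entirely hypothetical.

```python
# A hypothetical cutoff-style likely voter screen. The point values and
# threshold are invented for illustration; the actual PPD model is not public.
def likely_voter_score(resp):
    score = 0
    if resp["registered"]:            # registration status
        score += 2
    score += resp["past_elections"]   # elections voted in, of the last four
    if resp["enthusiasm"] >= 7:       # self-reported enthusiasm, 0-10 scale
        score += 1
    return score

def is_likely_voter(resp, threshold=4):
    return likely_voter_score(resp) >= threshold

# A hypothetical respondent: registered, voted in 3 of the last 4 elections.
print(is_likely_voter({"registered": True, "past_elections": 3, "enthusiasm": 8}))
```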

The poll does oversample, primarily as a result of the entire sample being registered voters and the use of the likely voter model. Because of how rigorously we screen, the disparity is not typically significant.

There has long been a debate among pollsters about whether to weight for party identification. Put simply, in 2016 we let the electorate tell us what it would look like, while other pollsters decided beforehand who they believed would vote on Election Day and adjusted accordingly.

Our philosophy is simple: That’s backward. If a pollster has a quality sample, then the electorate will speak to them if they are listening. They shouldn’t ignore what respondents are trying to tell them and they shouldn’t prejudge the electorate and allow their own biases to taint their methodology.

Margin of Error (MoE)

For random sample surveys, we calculate a standard margin of error (MoE) using the total known or estimated population size, sample size and confidence level (%).
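As a concrete sketch (the standard textbook formula, not necessarily our exact implementation), the worst-case MoE at p = 0.5 with a finite population correction can be computed from those three inputs:

```python
# Standard worst-case margin of error with a finite population correction,
# built from the inputs named above: sample size n, population size N,
# and confidence level.
import math

def margin_of_error(n, N, confidence=0.95, p=0.5):
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]  # common z-scores
    se = math.sqrt(p * (1 - p) / n)      # standard error of a proportion
    fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
    return z * se * fpc

# e.g. 1,000 likely voters drawn from roughly 150 million registered voters.
print(f"+/- {margin_of_error(1000, 150_000_000):.2%}")  # about +/- 3.10%
```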

Bootstrap Confidence Interval (CI)

No pollster can accurately estimate a traditional margin of error (MoE) if the sample is not a probability sample drawn by random selection. Though we don’t know exactly who will and will not respond to the panel on a given day of a survey, responses are still treated as opt-in.

Instead, we use a bootstrap method with a standard 95% confidence interval. Admittedly, this can be a bit difficult with more than one or two choices, but we have had success in accounting for this in the past.

To reduce human error, we use StatKey to calculate the 95% confidence interval, generating 5,000 bootstrap samples from the weighted results.
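For readers who want the mechanics, a percentile bootstrap along these lines resamples the weighted responses with replacement 5,000 times, mirroring the StatKey setting, and takes the 2.5th and 97.5th percentiles of the resampled estimates. The toy data here are invented, and StatKey’s internals may differ.

```python
# Percentile-bootstrap sketch of a 95% CI for one candidate's share.
import random

def bootstrap_ci(responses, weights, n_boot=5000, alpha=0.05):
    """95% CI for the weighted share choosing candidate 'A'."""
    estimates = []
    for _ in range(n_boot):
        # Resample with replacement, honoring the survey weights.
        resample = random.choices(responses, weights=weights, k=len(responses))
        estimates.append(resample.count("A") / len(resample))
    estimates.sort()
    lo = estimates[int(alpha / 2 * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy example: 60 weighted responses leaning slightly toward "A".
random.seed(0)
data = ["A"] * 33 + ["B"] * 27
print(bootstrap_ci(data, [1.0] * len(data)))
```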

Survey Wording & Order

Whether we call a respondent to request or conduct a repeat interview, or they take it online or via email, the questions are worded exactly as they are in the link below, and vote preference choices are randomized. (Caller A = Trump is the first choice; Caller B = Clinton is the first choice; Caller C = Johnson is the first choice; Caller D = Stein is the first choice.)
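As a minimal sketch of that randomization (one plausible reading of the Caller A/B/C/D scheme as a rotation of first choices, not our production interviewing software):

```python
# Rotate the candidate order per interview so each candidate appears first
# equally often across interviews, mirroring the Caller A-D example above.
import random

CHOICES = ["Trump", "Clinton", "Johnson", "Stein"]

def rotated_ballot():
    start = random.randrange(len(CHOICES))   # pick which candidate leads
    return CHOICES[start:] + CHOICES[:start]

print(rotated_ballot())
```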