Preparing for the Future: Smartphone-Only Polling

In collaboration with Tobias Konitzer and Sam Corbett-Davies, both graduate students at Stanford, I have taken on two different types of opt-in polling this year: an old-school internet display poll on a portal (MSN) and a mobile-only poll distributed via the Pollfish application inside mobile apps. Last week I wrote about the MSN work on PredictWise; this week I want to talk in more depth about the Pollfish work. Pollfish is providing us two important things: (1) an in-depth understanding of the good and the ugly of mobile-based polling (i.e., the future of all polling!) and (2) a fast, cheap, and accurate reflection of public opinion on policy that we have been using to inform the impact of the 2016 election.

To keep our polling fast and cheap, we survey 1,000 random respondents from the Pollfish panel with no sample stratification (i.e., no demographic quotas). The retail price for this is $1,000, and completed responses come back in 2-3 hours.

From the beginning of January we have been asking 13 public policy questions each week on key topics: abortion, anti-discrimination laws protecting sexual orientation, government efforts to reduce inequality, gun control, immigration, Iran, maternity leave, Medicare spending, government regulation, taxes on incomes over $250,000, free trade, global warming, and police brutality. Further, we ask for voter intention and party identification. Pollfish gives us the following demographics for free: age, gender, education, income, race, and state.

The raw data is good, but no one publishes raw data from crowdsourced polls. The textbook example of the danger of that approach is the 1936 presidential election, when the Literary Digest predicted a Republican landslide based on 2 million mail-in postcards; of course, the opposite happened, and Democrat Roosevelt won by a landslide. However, we have repeatedly shown that crowdsourced polls, in combination with bleeding-edge statistical analytics and access to population-level data, can produce representative estimates, and we are excited about the innovation around mobile-application data.

Translating the raw data into representative estimates is an evolving process, but an important one, because we believe that all polling will be smartphone-based within a few years. Why? Can you imagine landline telephones in 12 years? Can you imagine people answering random calls on their caller-ID-equipped tablet-phones? Regardless of the mode (i.e., telephone, online, etc.), response biases are going to be rampant, eradicating the differences between polls with a representative sample frame and crowdsourced polls. If people are going to answer polls, they are likely to be online, where online means mobile. So it is critical that we begin the process of understanding the polling mode of the future.

First, we model the raw response data on vote intention (and every public policy question) given the following respondent characteristics: age, gender, state, education, race, marital status, and party identification. This information divides the population into thousands of demographic categories, and we predict the percentage of people in each category who would vote for Clinton, Trump, or Other (or for or against each issue) if the entire country showed up to the poll. Every one of those predictions is informed by all polling responses, not just each day's, but past days' as well. In more technical terms, we have added complex dynamics to the model that allow us to parse out variance in sample composition over time from true swings over time. This is really important because some, if not most, of the demographic combinations do not answer our poll in the same numbers on any given day. Notably, this is a major advance over what we did last cycle.
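To make that first step concrete, here is a simplified Python sketch of the cell-level prediction idea. It uses fabricated respondent data and a plain multinomial logistic regression as a stand-in for the richer multilevel model described above; the field names and numbers are purely illustrative, not our production system or the Pollfish data.

```python
# Simplified illustration of predicting vote intention for every demographic
# "cell." The data and the model here are stand-ins, not the production model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 1000  # one day's worth of toy respondents
respondents = pd.DataFrame({
    "age":      rng.choice(["18-34", "35-54", "55+"], n),
    "gender":   rng.choice(["female", "male"], n),
    "state":    rng.choice(["CA", "TX", "ND"], n),
    "party_id": rng.choice(["dem", "rep", "ind"], n),
    "vote":     rng.choice(["Clinton", "Trump", "Other"], n),
})

features = ["age", "gender", "state", "party_id"]
enc = OneHotEncoder(handle_unknown="ignore")
X = enc.fit_transform(respondents[features])
model = LogisticRegression(max_iter=1000).fit(X, respondents["vote"])

# Every combination of demographics is a cell; we predict the vote split in
# each cell, even cells with few or no observed respondents.
cells = pd.MultiIndex.from_product(
    [sorted(respondents[c].unique()) for c in features], names=features
).to_frame(index=False)
cell_probs = model.predict_proba(enc.transform(cells[features]))
cells[list(model.classes_)] = cell_probs
print(cells.head())
```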

The model starts running on the first day our survey is in the field and creates a series of coefficients for each demographic, predicting the marginal impact of that demographic on how people answer the voter intention question. Unlike previous years, we then run the model for the next day with the previous day's coefficients as a sort of baseline. We restrict how much volatility these coefficients can have each day. The more we see of a demographic, the more we allow the related coefficient to be defined by each day's polls rather than by all of the polling. For instance, the marginal impact of being male versus female is allowed to evolve based on each day's polling, because we see a lot of men and women. But the coefficient for North Dakota is derived almost entirely from all of the data, because we do not see many respondents from North Dakota.
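Here is a stylized example of that shrinkage logic in Python. The precision-weighted average below is an assumed stand-in for the formal dynamic prior in our model, and the numbers are invented simply to show the behavior.

```python
# Stylized illustration (not the production code) of letting a coefficient
# drift day to day, with the amount of drift governed by how much data we see
# for that demographic on a given day.
def update_coefficient(prev_coef, today_coef, n_today, drift_scale=50.0):
    """Blend yesterday's coefficient with today's estimate.

    The weight on today's estimate grows with n_today, so well-observed
    groups (e.g., men vs. women) can move freely while sparse groups
    (e.g., North Dakota respondents) stay anchored to the pooled history.
    """
    weight_today = n_today / (n_today + drift_scale)
    return (1 - weight_today) * prev_coef + weight_today * today_coef

# Gender is seen thousands of times a day: today's data dominates (~0.199).
print(update_coefficient(prev_coef=0.10, today_coef=0.20, n_today=5000))
# North Dakota might yield a handful of responses: history dominates (~0.109).
print(update_coefficient(prev_coef=0.10, today_coef=0.20, n_today=5))
```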

Second, we project these predictions onto our best estimate of the likely voting population, a process known as post-stratification. Specifically, we weight our prediction for each demographic category by that category's known fraction of the overall target population, likely voters. To derive the target population, we leverage a combination of population-level Census data and voter file data provided by TargetSmart, as well as the latest polling on party identification.
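The post-stratification arithmetic itself is simple; here is a small sketch with invented cell shares (the real target population has thousands of cells built from the Census and voter file data described above).

```python
# Minimal post-stratification sketch with made-up numbers: modeled cell-level
# vote shares are weighted by each cell's share of the likely-voter population.
import pandas as pd

cells = pd.DataFrame({
    "cell": ["young women, CA", "older men, TX", "older women, ND"],
    "clinton_share": [0.62, 0.38, 0.44],     # model's predicted support in the cell
    "population_share": [0.20, 0.70, 0.10],  # cell's share of likely voters
})

# The headline estimate is the population-weighted average of the cell estimates.
clinton_estimate = (cells["clinton_share"] * cells["population_share"]).sum()
print(f"Post-stratified Clinton share: {clinton_estimate:.1%}")  # 43.4%
```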

From this modeled and post-stratified data we can estimate (1) the proportion of voters in each of these categories who will turn out to vote in November, and (2) how each demographic bucket is likely to vote. This is a major improvement over conventional methods, which have to estimate the likely voting population and vote intention separately. Here is the national voter intention through today, along with state-by-state outcomes (click here to go to the updating infographic!). Remember, this is one experimental poll, not the aggregation of polls you see on Huffington Post's Pollster (so do not email me about individual state outcomes; I trust the polling averages, not one poll). But it also means that the comparison we are up against, polling-average models, is much more expensive and takes much longer to process. While I would not use a 50-state smartphone-only poll right now to decide on the detailed allocation of my $1 billion election advertising budget (if I were a 2016 presidential campaign), I would certainly use it to get a good idea of the landscape.

This chart is the product of 46 unique polls answered by 46,000 unique respondents over 11 months. Our respondents do not resemble the voting population; if we just reported the raw data, we would be in trouble. Only 14 percent of respondents are over 55 years old, versus about 47 percent of the voting population. And 68 percent are female, versus about 53 percent of the voting population. But we believe that the transformed data provides meaningful information about many segments of the population. You can see below that we can project not just state-by-state results but detailed demographic breakdowns. Here is an example of one of our public policy questions:

Early in the primary process, we showed how Donald Trump's positions aligned well with Republican voters relative to the other Republican primary candidates. Because we have high-frequency data, we showed that support for gun control shot up in the weeks following the Orlando nightclub massacre, but then quickly dropped back to near pre-shooting levels. Using that flexibility, we gauged support for statements Trump made in the second debate within hours of the debate ending. Trump built his campaign on promising to reverse the flow of immigrants into America, and it is a popular position. Click here to explore more!

We are excited by both the promise and the present of mobile-application-based polling. We want to take advantage of this unique data source now, but, in many ways more importantly, ensure we have the technology necessary to harness this data source when, in a few cycles, it is the only one left!

Written with Tobias Konitzer and Sam Corbett-Davies, both graduate students at Stanford.
