Ryan Plus One Week: No Bounce

Since presumptive Republican nominee Mitt Romney announced his choice of Paul Ryan as his vice-presidential running mate, HuffPost Pollster has logged five national polls and 11 state-level polls, all fielded since the news broke. The bottom line? No real change in Obama-Romney vote preference, at least at the national level.

Using a tracking model and algorithm I've developed exclusively for Pollster, I estimate that Obama's share of national-level voting intentions is virtually unchanged, ticking up from 46.2 percent on Friday, Aug. 10 (the day prior to the Ryan announcement) to 46.3 percent today, a mere 0.1 percentage point increase (see the graph accompanying this post). Romney's numbers are off by the same amount, falling from 45.1 percent on Friday, Aug. 10 to 45 percent today.

These results are consistent with the stability in voting intentions that we've seen through the spring and summer. Voting intentions today are all but indistinguishable from where they have been for months, with Obama maintaining a small lead over Romney. The Obama-Romney margin at the national level, 1.3 percentage points as of today, is right at the edge of conventional levels of statistical significance; the probability that Obama leads Romney in national-level voting intentions is about 80 percent.
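
To make that "about 80 percent" concrete, here is a minimal sketch of how a margin estimate and its uncertainty translate into a probability of leading, assuming an approximately normal distribution for the estimated margin. The standard error below is an illustrative value chosen to roughly reproduce the quoted probability; it is not a number taken from the Pollster model.

```python
from scipy.stats import norm

# Illustrative numbers only: the estimated national margin (Obama minus
# Romney) and an assumed standard error for that margin.
margin = 1.3        # percentage points
margin_se = 1.55    # assumed for illustration; not from the Pollster model

# Probability the true margin is positive (i.e., Obama leads), treating
# the estimated margin as approximately normal.
p_obama_leads = norm.cdf(margin / margin_se)
print(f"P(Obama leads) = {p_obama_leads:.2f}")  # roughly 0.80
```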

In the coming weeks and months, I will provide the details on the model and its underlying assumptions in a series of posts in this space. For today, let's start with an initial sketch of how the model works.

[Graph: national-level Obama and Romney vote share estimates over time]

The model has its origins in work I did with Doug Rivers at Intersurvey/Knowledge Networks in 2000. The idea was to take the high volume of polling data generated by Intersurvey, pool it with other national and state-level polling, and produce state-by-state estimates of voting intentions. Computer-intensive simulation methods then translate the state-level estimates into an estimated Electoral College count for each candidate, along with "margins of error." Our Election Eve forecast of the Electoral College from 2000 appears here. These ideas, combining polls to form state-level estimates of vote shares and using simulation methods to characterize the resulting uncertainty over the implied Electoral College vote, lie at the heart of other poll-averaging approaches, such as Nate Silver's and Drew Linzer's.
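
As a rough illustration of the simulation step, the sketch below draws state-level vote shares from their estimated distributions and tallies Electoral College votes across many simulated elections. The states, shares, standard errors, and the "safe" electoral-vote base are all made-up placeholders, not output from the model, and this toy version ignores the between-state correlations discussed later in this post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: each state's estimated Obama two-party share, a
# standard error for that estimate, and its electoral votes.
states = {
    "OH": (0.505, 0.02, 18),
    "FL": (0.495, 0.02, 29),
    "VA": (0.500, 0.02, 13),
}
base_ev = 250  # electoral votes treated as safe for Obama in this toy example

n_sims = 10_000
ev_totals = np.full(n_sims, base_ev)
for share, se, ev in states.values():
    # Draw a plausible vote share for each simulated election and award the
    # state's electoral votes whenever the draw exceeds 50 percent.
    draws = rng.normal(share, se, size=n_sims)
    ev_totals = ev_totals + np.where(draws > 0.5, ev, 0)

print("mean EV:", ev_totals.mean())
print("95% interval:", np.percentile(ev_totals, [2.5, 97.5]))
print("P(270 or more):", (ev_totals >= 270).mean())
```

Summarizing the simulated totals yields the estimated Electoral College count and its accompanying "margin of error."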

The model has three key features:

(1) How does support change over time? The model assumes that, absent polling information to the contrary, the best guess for tomorrow's voting intentions is today's voting intentions (a "random walk" model). On any given day, support for Obama may be higher or lower than yesterday's level (and this is what the polls tell us), with the amount of day-to-day volatility a key parameter in the model.

Up through this stage of the campaign, vote shares have generally been extremely stable: given today's vote shares, the "95 percent MOE" on tomorrow's vote shares is just "today +/- 0.45 percentage points." This said, the model can and will detect abrupt or flamboyant changes in underlying preferences; the stability in voting intentions recovered by the model seems real, at least for now.
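
For readers who want to see the random-walk assumption in miniature, here is a toy simulation. The daily innovation standard deviation of roughly 0.23 points is just the back-of-envelope value implied by the "+/- 0.45 point" 95 percent band (0.45 / 1.96); it is not an estimate pulled from the fitted model, and the starting level is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tomorrow's latent support equals today's plus a small random shock.
# A daily innovation SD of about 0.23 points matches the "+/- 0.45 point"
# 95 percent band quoted above (0.45 / 1.96 is roughly 0.23).
daily_sd = 0.23
n_days = 90
support = np.empty(n_days)
support[0] = 46.2  # starting level, in percentage points
for t in range(1, n_days):
    support[t] = support[t - 1] + rng.normal(0.0, daily_sd)

print(support[:5].round(2))
print("spread over 90 days:", round(support.max() - support.min(), 2))
```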

(2) Polls aren't perfect. Polls have "noise" due to sampling error, which is typically reported as the "margin of error" of the poll. For simple random samples, sampling error (and hence the width of the "MOE") decreases at rate "root n" (e.g., doubling a poll's sample size shrinks the MOE by a factor of root 2, about a 29 percent reduction; halving the MOE requires quadrupling the sample size). The contribution of a poll to the model's estimates is a function of the poll's sample size (inter alia). Of course, few polls are generated by simple random samples; the use of weights to improve the representativeness of the poll usually means that the poll isn't a simple random sample, but that is a topic for another day.
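
A quick illustration of the "root n" arithmetic, assuming a simple random sample and a 50/50 split:

```python
import math

def moe_95(p: float, n: int) -> float:
    """95 percent margin of error, in percentage points, for a simple
    random sample of size n with observed proportion p."""
    return 100 * 1.96 * math.sqrt(p * (1 - p) / n)

# Doubling the sample size shrinks the MOE by a factor of sqrt(2) (about
# 29 percent); halving the MOE requires quadrupling the sample size.
for n in (500, 1000, 2000):
    print(n, round(moe_95(0.5, n), 2))
# 500 -> ~4.4, 1000 -> ~3.1, 2000 -> ~2.2
```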

Even with the use of weights, polls usually have other errors. Sampling procedures, field periods, and the choice of sampling frame (RDD vs. landline-only vs. web panelists) are typical sources of error; weighting attempts to deal with errors of this sort. Additional error can arise from question-wording effects (whether and how "someone else," "don't know," or "not voting" options are offered to the respondent), question-order effects, or even social-desirability effects (respondents giving responses that they think are "expected" of them). These sources of error are much harder to overcome via weighting.

We treat all these sources of survey error as systematic, or "hard-wired" into the way survey houses generate their estimates. The net error associated with a survey house is known as a "house effect." The fact that particular pollsters tend to produce numbers that are either pro-Obama or pro-Romney will be familiar to regular visitors to these pages. Suffice it to say that these house effects are an important part of my model.
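
House effects can be illustrated with a deliberately crude calculation: take each pollster's average deviation from the pooled mean over a short window. The pollster names and numbers below are invented, and the real model estimates house effects jointly with the underlying trend rather than with this shortcut.

```python
import pandas as pd

# Invented polls: pollster ("house"), field date, and Obama share.
polls = pd.DataFrame({
    "house": ["A", "A", "B", "B", "C"],
    "date": pd.to_datetime(["2012-08-12", "2012-08-15", "2012-08-13",
                            "2012-08-16", "2012-08-14"]),
    "obama": [47.0, 46.5, 44.5, 45.0, 46.2],
})

# Crude house-effect estimate: each house's average deviation from the
# pooled mean over the window. (This shortcut ignores timing and trend.)
overall = polls["obama"].mean()
house_effects = polls.groupby("house")["obama"].mean() - overall
print(house_effects.round(2))

# A house-adjusted reading subtracts each house's offset from its polls.
polls["adjusted"] = polls["obama"] - polls["house"].map(house_effects)
print(polls[["house", "obama", "adjusted"]])
```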

(3) Exploiting non-poll information. There is a lot (a lot!) of information about the likely outcome of the 2012 presidential election in the results from the last three or four presidential elections. One doesn't need a PhD in political science to see that states like Ohio and Florida have been key in recent elections. Some states track the national outcome more reliably than others. Geographic patterns in state-level, presidential election outcomes are well known (or if not, easily discovered). Thus, a poll in one state is informative about vote shares in "politically similar" states too. These between-state correlations play a key role in my model. In addition, "home state" effects (the tendency for a candidate's home state to display a higher-than-average swing toward that candidate) are factored into the modeling.
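
As a stylized example of borrowing strength across "politically similar" states, the sketch below uses invented historical results for two states to compute their correlation and average gap, then lets a new poll in one state nudge the estimate for the other. The actual model handles this through between-state correlations estimated inside the model, not this ad hoc blend; every number here is illustrative.

```python
import numpy as np

# Invented Democratic two-party shares in two "politically similar" states
# over the last four presidential elections.
ohio = np.array([0.465, 0.488, 0.515, 0.520])
wisconsin = np.array([0.490, 0.510, 0.535, 0.545])

# Historical correlation and average gap between the two states.
corr = np.corrcoef(ohio, wisconsin)[0, 1]
avg_gap = float((wisconsin - ohio).mean())

# Crude way to let a new Ohio poll inform Wisconsin: shift the Ohio reading
# by the historical gap, then blend it with the current Wisconsin estimate,
# using the correlation as the weight.
ohio_poll = 0.505         # hypothetical new Ohio poll, Obama two-party share
wisconsin_prior = 0.525   # hypothetical current Wisconsin estimate
implied = ohio_poll + avg_gap
wisconsin_updated = corr * implied + (1 - corr) * wisconsin_prior
print(round(corr, 2), round(wisconsin_updated, 3))
```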

I'll post about these particular facets of the modeling in the weeks to come. For now, the conclusion is that the Veep announcement hasn't closed the small but stubbornly persistent lead Obama enjoys over Romney. In the weeks ahead we'll see what the model is saying about the likely outcome of the election, state-by-state numbers, the reaction of the electorate to the big "set pieces" of the campaign (the conventions, the debates), and the big ad buys we're yet to see drop in battleground states.
