How do pollsters determine when an incumbent is vulnerable, especially in a primary? My NationalJournal.com column this week takes up the subject in the context of two recent tests of potential primary contests facing incumbent Senators Arlen Specter and Christopher Dodd.
Two additional thoughts that did not make it into the column:
First, it is not entirely clear (to me) that a classic likely voter screen based on self-reported intent to vote would have produced different results for Joe Lieberman in early 2006 on the initial Democratic primary head-to-head versus Ned Lamont in May of 2006 (I reference both in the column). It may have taken the actual campaign, and the awareness it created of Lamont's challenge, to trigger real enthusiasm and intent to vote among the anti-war Democrats who gave Lamont his margin of victory. What might have been useful in early 2006, however, would have been a look at the intensity of attitudes about both Lieberman and the Iraq War among Democrats with a true history of primary voting in Connecticut (and not just all registered Democrats). Were hard-core primary voters different from other registered Democrats?
Second, it should go without saying, but "vulnerability" is just the first necessary step in defeating an incumbent office-holder. The second and more critical step is a challenger whom voters perceive as viable and able, one who makes a convincing case for why the incumbent should be turned out of office. Many a vulnerable incumbent never faces a truly viable challenger, and many of those that do are able to raise enough doubts about the challenger to win reelection. My guess is that in any given election cycle, far more incumbents could theoretically be described as "vulnerable" than ultimately lose.
Update (6/3) - I posted these comments separately, but they bear repeating here:
One reader emailed to take strong exception to my use of the word "misleading" in the following sentence from the column:
One reason for the misleading early numbers in 2006 may have been that Quinnipiac sampled all self-identified registered Democrats rather than a narrower subset of likely primary voters. Their May 2006 sample of 528 Democrats, for example, amounted to 34 percent of the full sample of 1,536 registered voters they interviewed. Yet the actual Democratic primary turnout amounted to just 15 percent of Connecticut's active registered voters.
I will grant that I could have chosen a less loaded word than "misleading," as some will hear it as an insinuation about the pollster's motives or the accuracy of the data. For the record, I do not believe that anyone involved in producing the Quinnipiac Poll meant to mislead anyone, nor did I mean to imply that the data they reported were inaccurate. The record should show that once they shifted to reporting vote preferences among a narrower group of "likely voters," they showed Lamont running much closer to Lieberman in June, "inching ahead" in July and ultimately leading by a wide margin in early August. Their final poll showed Lamont leading by six percentage points. He won by four -- that's as close as any poll should expect to get.
The larger point I was trying to make with the column is that we mislead ourselves -- and by we I mean all of us, pollsters, journalists, campaigns, political junkies -- whenever we treat samples of a third to half of adults in a state as a meaningful measure of the preferences of "likely primary voters" when the actual turnout is typically a much smaller fraction of adults.
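For readers who want to see the mismatch spelled out, here is a back-of-the-envelope sketch using only the figures quoted above from the May 2006 Quinnipiac poll (the variable names are mine, chosen for illustration):

```python
# Arithmetic behind the sampling mismatch described above.
# All figures come from the May 2006 Quinnipiac poll cited in the column.
dem_subsample = 528          # self-identified Democrats interviewed
full_sample = 1536           # all registered voters interviewed
actual_turnout_share = 0.15  # primary turnout as a share of active registered voters

sample_share = dem_subsample / full_sample
print(f"Share of sample treated as primary electorate: {sample_share:.0%}")
print(f"Actual primary turnout share:                  {actual_turnout_share:.0%}")
print(f"Overstatement factor: {sample_share / actual_turnout_share:.1f}x")
```

In other words, the poll's Democratic subsample represented more than twice as large a slice of the electorate as the group that actually showed up to vote -- which is the sense in which such numbers can mislead us.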
My use of the Quinnipiac Poll was also largely a coincidence. They happened to produce two polls last week with primary head-to-head questions, including one in Connecticut, and they fielded a similarly designed poll in Connecticut three years ago. However, their practices in terms of sampling primary voters are very similar to those used by most other media pollsters.