NY-23 Watch (Tuesday)

Yesterday, the Club for Growth, an organization that backs conservative Republicans, released a new survey it conducted on the special election in New York's 23rd District. It showed Doug Hoffman, the Conservative Party candidate the Club has endorsed, running a few points ahead of Democrat Bill Owens, with Republican nominee Dede Scozzafava in third. This result differed from two surveys conducted earlier in October by Siena College and Daily Kos/Research 2000, which showed Owens narrowly leading Scozzafava, with Hoffman in third.

Is the Club for Growth result cooked? That's what Nate Silver strongly implied in a post last night headlined, "Reality Check: NY-23 Poll May Seek to Alter, Not Reflect, Reality." Let's take a closer look.

[Table: the six publicly released NY-23 polls, with field dates, sample sizes, candidate support, and undecideds. Only fragments survive in this copy: the headers "Dates" and "Undecided," one sample size (600 LV), and the values 22, 29, 28, 17, and 31.]

The table above shows the results from all six polls that have been publicly released so far for this race. The Club for Growth/Basswood survey is the outlier, in that it shows Hoffman running seven percentage points higher than in the two surveys conducted earlier in the month. (UPDATE: A new survey conducted by Neighborhood Research, sponsored by another conservative group and released while I was drafting this post, has results consistent with the CFG/Basswood survey.)

However, the trend is consistent with earlier results and recent news. The two surveys from Siena College, in late September and mid-October, track a seven-point decline in Scozzafava's support and a six-point increase for Hoffman. Quite a bit also happened over the last week. Last Wednesday night, while the Daily Kos/Research 2000 poll was still in the field, Sarah Palin endorsed Hoffman on her Facebook page, a development that subsequently received national attention. Hoffman also received endorsements from Steve Forbes and Rick Santorum on Friday. Basswood Research conducted its survey on Saturday and Sunday.

Let's be clear: It is always sensible to treat sponsored, internal surveys with extra skepticism when they are publicly released. Political scientists who have studied public polls (examples here and here) find that partisan surveys typically show an average bias of 2 to 4 percentage points favoring the sponsoring party. One reason for this phenomenon is that most internal polls never see the light of day. Campaigns typically choose to share only those polls showing good news, not bad.

But in this case, Nate Silver is making a considerably stronger accusation. After running through a list of concerns, Silver concludes:

[T]his is very probably not a case, a la Strategic Vision, where the numbers were simply fabricated. But there's an awful lot that a pollster can do short of making up numbers -- asking leading questions, applying implausible likely voter models or demographic weightings, selecting an unorthodox sample frame, etc. -- to produce a result that fits its desired narrative.

Do we have any evidence that Basswood Research used leading questions, implausible likely voter models or demographic weightings, or an unorthodox sample frame to produce its survey? Let me take these issues one by one.

Leading questions? Silver concedes in an update that Club for Growth posted a complete filled-in questionnaire that he had not seen when writing his post, although he hints at "fresh" questions raised by seeing the full text. I am not sure what he's referring to, as I see nothing that would obviously bias the result in Hoffman's favor or that deviates sharply from the standard practice of campaign pollsters. For what it's worth, the Basswood questionnaire provides more complete disclosure than the other public polls, in that it provides full text and results of the demographics (omitted by Siena) and the full text of the likely voter screen questions (omitted by both Research 2000 and Siena).

Implausible demographic weighting? Silver is concerned that "[o]nly 14 percent of the likely voters in this poll are age 40 or under, as compared with about 40 percent in the Research 2000 poll." I'd agree with FiveThirtyEight commenter Matt Hogan that it's the Research 2000 age composition that's implausible. Nearly half (49%) of its likely voters are under 45 years of age. Both the national and New York exit polls for the 2006 general election report only 36% in that age category, and if anything, exit poll estimates tend to skew too young.

The sample was also weighted geographically, according to Basswood pollster Jon Lerner, so that the percentage contributed by each county in the sample conforms to the distribution of voters in the 2008 and 2006 elections. I have not attempted to gather county-level vote returns for NY-23, but Basswood included the weighted value for each county in the filled-in questionnaire, so anyone can evaluate its geographic representation. Among campaign pollsters, that sort of geographic weighting is standard practice.
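
For readers unfamiliar with the mechanics, here is a minimal sketch of that sort of geographic weighting in Python. The county names are drawn from the district, but the shares are hypothetical placeholders, not Basswood's actual figures; the idea is simply that each respondent's weight is the county's target share of the electorate divided by its share of the raw sample.

```python
# Minimal sketch of geographic post-stratification weighting.
# Shares below are hypothetical placeholders, not Basswood's figures.

raw_sample_share = {"Jefferson": 0.22, "Oswego": 0.28, "St. Lawrence": 0.20,
                    "Franklin": 0.12, "Oneida": 0.18}
target_share     = {"Jefferson": 0.25, "Oswego": 0.24, "St. Lawrence": 0.21,
                    "Franklin": 0.13, "Oneida": 0.17}

# weight = target share / raw sample share, so the weighted sample
# matches the expected geographic distribution of the vote.
weights = {county: target_share[county] / raw_sample_share[county]
           for county in raw_sample_share}

for county, w in sorted(weights.items()):
    print(f"{county:13s} weight = {w:.2f}")
```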

Unorthodox sample frame? Hardly, although there is an important difference in the sample frames being used in NY-23. Siena College and Research 2000 are using random digit dial samples -- which can reach every working landline phone in the district by randomly varying the final digits of telephone numbers in exchanges within the district. When I spoke to him by phone last night, Basswood pollster Jon Lerner confirmed that he sampled from a list of registered voters, selecting those who had cast ballots in either the 2006 or 2008 general elections.

While pollsters continue to debate the merits of samples drawn from voter lists versus random digits, the use of lists to survey congressional districts is hardly unorthodox. Pollsters have used list samples for the vast majority of congressional district polling over the last several decades, since gerrymandered district boundaries make random digit sampling impractical in most districts. Telephone exchanges are a crude match to geography below the county level, and very few voters can identify their district number when asked. The only reason an RDD sample is even an option in NY-23 is that most of the district falls within eight undivided counties, leaving only a small portion in three counties that are split between districts.
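
To make the frame difference concrete, here is a minimal sketch, in Python, of how a list-based frame like the one Lerner describes gets built. The voter-file records and field names are hypothetical; real files vary by vendor.

```python
import random

# Toy voter file; field names are hypothetical.
voter_file = [
    {"id": 1, "county": "Jefferson",    "voted_2006_gen": True,  "voted_2008_gen": True},
    {"id": 2, "county": "Oswego",       "voted_2006_gen": False, "voted_2008_gen": True},
    {"id": 3, "county": "St. Lawrence", "voted_2006_gen": False, "voted_2008_gen": False},
]

def build_frame(voters):
    """Keep only voters with 2006 or 2008 general-election vote history."""
    return [v for v in voters if v["voted_2006_gen"] or v["voted_2008_gen"]]

def draw_sample(frame, n, seed=23):
    """Draw a simple random sample from the list frame."""
    rng = random.Random(seed)
    return rng.sample(frame, min(n, len(frame)))

frame = build_frame(voter_file)      # voter 3 drops out: no general-election history
sample = draw_sample(frame, n=600)   # a real draw would target roughly 600 completes
print([v["id"] for v in sample])
```

An RDD design, by contrast, starts from telephone exchanges rather than a voter file, which is why it maps so poorly onto district boundaries below the county level.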

Implausible likely voter model? I don't see it. While pollsters differ wildly in their likely voter selection or modeling techniques, the screen used by Basswood seems reasonable and appears to fall within the norms of typical pollster practice.

Let's run the numbers: In last year's general election, according to the Almanac of American Politics, just over 253,000 people voted for either Obama or McCain in the 23rd District (Voter Contact Services puts total turnout at 258,000). According to Wikipedia, 199,103 cast a ballot for Congress there in 2008. The nearby 20th District of New York provides another useful statistic: 160,940 voters showed up for a special election held there in April. While no one knows for sure how many will turn out next week, turnout is likely to fall somewhere in the neighborhood of 200,000 (or lower; see the first update below).
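
The arithmetic behind that estimate is simple enough to lay out explicitly; here is a quick worked version, using only the figures cited above:

```python
# Turnout benchmarks for NY-23, using the figures cited in the text.
pres_2008_two_party = 253_000  # Obama + McCain votes (Almanac of American Politics)
pres_2008_total     = 258_000  # total 2008 turnout (Voter Contact Services)
house_2008          = 199_103  # 2008 votes for Congress in NY-23 (Wikipedia)
ny20_special        = 160_940  # turnout in the NY-20 special election, April 2009

# The 2008 House vote ran well below presidential turnout...
print(f"2008 House vote as share of 2008 turnout: {house_2008 / pres_2008_total:.0%}")

# ...and a special election should run lower still, so the NY-20 figure
# and the 2008 House figure loosely bracket the plausible range.
print(f"Rough bracket: {ny20_special:,} to {house_2008:,}")
```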

So how does the Basswood model compare? The firm Voter Contact Services, which sells list samples, reports that it has 267,599 NY-23 voters identified as having voted in a general election in 2006 or 2008 -- so that is roughly the population Basswood sampled. The key question is how many of the sampled voters passed Basswood's screen question, which accepted those who said they were "very likely" to vote in next week's election but terminated those who said they were only "somewhat" or "not likely" to vote. I do not have the terminate data from Basswood, but when the AP/IPSOS poll asked a similar question of adults using a 10-point screen in early October 2006 (via the Roper Center iPoll database), 69% chose the most extreme "completely certain to vote" response.

That result is typical. Screens based on self-reported intent to vote may look "very tight" but usually are not, as respondents vastly overstate their true intentions. My guess is that the Basswood "very likely" percentage would be higher than the AP/IPSOS figure, all things being equal, since Basswood offers just three response categories to AP/IPSOS's ten. Regardless, if we assume that the Basswood question identified 60% to 70% of their registered voters as "very likely" voters, that would project to a turnout somewhere in the range of 160,000 to 190,000, which seems more than plausible.
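
Under that assumption, the projection is a single multiplication. A sketch (the 60% to 70% pass rate is my guess, as noted above; the update below reports the actual figure):

```python
# Project special-election turnout from the screen pass rate.
frame_size = 267_599  # NY-23 voters with 2006 or 2008 general-election history (VCS)

for pass_rate in (0.60, 0.70):
    projected = frame_size * pass_rate
    print(f"{pass_rate:.0%} 'very likely' -> projected turnout ~{projected:,.0f}")

# 60% -> ~160,559 and 70% -> ~187,319, i.e. the 160,000-190,000
# range cited in the text.
```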

[Update: I guessed low. According to Jon Lerner, 85% of the registered voters they called said they were "very likely" to vote in the special election. See update 2 below.]

Of course, reasonable pollsters can and will quibble over how to select likely voters. Recent efforts to validate turnout on list samples have revealed problems with self-reported likelihood questions. But the notion that this particular poll was cooked -- that it used leading questions, an unorthodox frame, and an implausible likely voter model -- is not supported by the available facts.

Update: I will gladly defer to others with more expertise on predicting turnout, but I probably should have set 200,000 as a high-side (small-l) liberal guess at turnout. David Wasserman, the House Editor at the Cook Political Report, who spends a lot more time thinking about these things, writes today that in the NY-20 special election earlier this year, "roughly half the number of voters who turned out for the 2008 presidential election showed up for the special election, which suggests between 110,000 and 130,000 voters could show up for this race."

Via email, David adds, "anything above 150,000 is a pipe dream."

It is also worth considering the advice Twittered yesterday by NBC's Chuck Todd, who saw more than his share of partisan, congressional district polling in his years as Hotline editor:

Be very cautious of ALL NY 23 polling. Why? There's nothing driving turnout; figuring out WHO is going to vote is near impossible.

Which gets back to my larger point. Yes, caution is in order, but it's foolish to single out the Club for Growth/Basswood poll as somehow inherently implausible on such flimsy evidence.

Update 2 - Jon Lerner of Basswood Research emails:

In the CFG poll in NY-23, the percentage of those who were contacted and screened out for lack of being "very likely" voters was 15%. In my experience, in a likely low-turnout race such as an off-year special election, using voter lists with vote history is far preferable to random digit dialing. On the assumption that turnout in a special general election will be far lower than turnout in a normal general election, you want to begin with people with vote history. Still, even that alone is not good enough, because many on-year general election voters will not vote in an off-year special election (especially one in which the highest office on the ballot is congress). So, we try to narrow the scope further by including only those respondents who have vote history and who say they are "very likely" to vote in the special election. Although we do not screen further, my assumption is that even with that screen we will have a small number of non-voters, as respondents tend to overstate their likelihood of voting. Thus, while I believe the sample we derived for the NY-23 survey was as accurate as can be, if it is off at all, it is likely to be over-inclusive rather than under-inclusive.

[Typos and grammar corrected]
