Andrew Kohut is the President of the Pew Research Center and arguably the
dean of the survey research profession. President of the Gallup
Organization from 1979 to 1989, Kohut recently received the American
Association for Public Opinion Research's highest honor, its
2005 Award for Exceptionally Distinguished Achievement. He spoke with
Pollster.com's Mark Blumenthal last week about how the Pew Research
Center will measure voting intentions for the upcoming elections and
about the future of survey research.
Topic A - for just about everybody right now - is handicapping the races for control of the House and Senate. I'm sure our readers would be interested in your take. But I think perhaps of even greater interest would be what kinds of surveys and measures you are looking at and will be looking at over the coming weeks?
Well, we're going to do what we traditionally do in off-years and that is measure voting intentions for the House. Generally in off-years the pre-election polls do a pretty good job of estimating the popular vote for the House, and we know that has a correspondence to the number of seats that each party has. In 1994 we were very fortunate that The Times Mirror Center, the center that preceded Pew, was among the first to say, "We've got a Republican plurality in the popular vote." We didn't have quite enough of a margin in the poll to flatly predict that Republicans would take over, even though the poll provided a very accurate estimate of the popular vote, so we described it as a high likelihood. We could have the same thing happen in this election. What I'm struggling with is that safe-seat redistricting has made the relationship between the popular vote and seats won by each party weaker than it once was. And so we're going to have to try to make our estimates taking into account the traditional relationship between seats and votes and how that relationship may have changed since the '90s Census was used to redistrict.
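The votes-to-seats relationship Kohut describes is often approximated by fitting a simple line to past elections' (national vote share, seat share) pairs and projecting the current vote estimate through it. The sketch below illustrates the idea only; the data points are invented placeholders, not real election results, and this is not Pew's actual model:

```python
# Toy sketch of the votes-to-seats relationship: fit a line to past
# (vote share, seat share) pairs, then project a current vote estimate.
# All numbers below are hypothetical illustrations, not real results.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical past elections: one party's two-party vote share vs.
# the share of House seats it won. Note seats swing more than votes.
vote_share = [0.46, 0.48, 0.50, 0.52, 0.54]
seat_share = [0.410, 0.455, 0.500, 0.545, 0.590]

slope, intercept = fit_line(vote_share, seat_share)

# Project seats for a hypothetical 51% popular-vote estimate.
projected_seats = (slope * 0.51 + intercept) * 435
```

A slope above 1 is the classic "seats bonus": small shifts in the popular vote historically produced larger shifts in seats, which is exactly the relationship Kohut worries safe-seat redistricting has flattened.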
Will you be looking at any of the statewide surveys or congressional level surveys that are out in the public?
Well, I look at them just for the sake of trying to understand what else is going on out there, but what I learned from Paul Perry at the Gallup Organization was to not use ad-hoc judgments, but to focus on the survey measures that we use to estimate the size of the vote of the party or a candidate. So in the meantime we're concentrating on whether our turnout scale is working well, how the undecideds are likely to break, what the last minute trends are if any, and how stable are people's choices. Those are the things that are really most important to me. I'm not a handicapper, I'm a measurer. There's a difference.
Actually that's a perfect segue to another question I wanted to ask. Just before the 2004 election, as you well know, your final survey gave George Bush a three-point lead in the popular vote. And you did a projection in which you allocated the remaining six percent that were undecided about evenly and predicted a 51 to 48 Bush win, which turned out to be right on the nose exactly the way the popular vote broke. You wrote in that final report, "Pew's final survey suggests the remaining undecided vote may break only slightly in Kerry's favor." And I think you did a three-to-three allocation or something close to that. And I just wondered what you can tell us about the process you used to reach that conclusion then and what does it say about what you will do in the coming weeks?
Well, we do a couple of things. First, we throw out half of the undecideds because validation surveys show that they vote at very low rates. Then, we look at a regression equation that predicts choices based upon all of the other questions we have in the survey among the decideds and apply that model to the undecideds. We also then look at the way the leaners - that is the people who don't give us a choice initially - are breaking, and make the assumption that the leaners are closer to the undecideds than to the people who give us an answer right off the top of their heads when we first ask.
I want your readers to know that we ask several questions: the first is the flat-out choice question, then we ask which way people lean, and we look at how the leaners break. We take those two estimates into account and divide the undecideds. They are based upon measures. They're not based upon "you know I think," "I got this feeling," "history tells us," or any of this other stuff where you can let judgments get in your way.
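The allocation procedure Kohut lays out - discard half the undecideds, then split the remainder using a model fit on decided respondents together with the observed leaner break - can be sketched roughly as follows. Every name and number here is a hypothetical illustration, not Pew's actual model:

```python
# Hypothetical sketch of allocating undecided voters, loosely following
# the steps described: drop half the undecideds (validation studies show
# they vote at low rates), then split the remainder using (a) a model
# fit on decided respondents and (b) the observed break among leaners.

def allocate_undecideds(pct_a, pct_b, pct_undecided,
                        model_share_a, leaner_share_a):
    """Return projected final shares for candidates A and B.

    pct_a, pct_b    -- decided support, in percentage points
    pct_undecided   -- undecided share, in percentage points
    model_share_a   -- share of undecideds a regression model (fit on
                       decided respondents) assigns to candidate A
    leaner_share_a  -- share of initial non-responders who, when
                       pushed, lean to candidate A
    """
    remaining = pct_undecided / 2.0          # assume half stay home
    # Average the two measurement-based estimates of the break.
    share_a = (model_share_a + leaner_share_a) / 2.0
    final_a = pct_a + remaining * share_a
    final_b = pct_b + remaining * (1.0 - share_a)
    return round(final_a, 1), round(final_b, 1)

# Invented example: a 48-45 race with 6 points undecided, where the
# model gives 45% of undecideds to A and leaners split 40% to A.
a, b = allocate_undecideds(48, 45, 6, 0.45, 0.40)
```

Averaging the model-based and leaner-based estimates is one simple way to combine the "two estimates" mentioned above; the actual weighting scheme is not specified in the interview.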
What I learned from Paul Perry - and I keep going back to him because he taught me everything I know about this - is that what you should be prepared to do is to have a way of measuring all of the things that you're interested in covering and be able to look at those measurements in the current election relative to your experience in previous elections. And we try to do that. The one time I didn't do that was in 2002, because I was preoccupied with other things. On an ad hoc basis, I kicked one of my traditional questions out of the turnout scale and it really hurt our projection. It made it too Democratic. I won't do that again. I chalk that mistake up to being preoccupied with the first Global Survey that we were doing at the same time. In any event, having said that, that's my philosophy and that's the way we will pursue it here at the Pew Research Center.
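The turnout scale Kohut credits to Paul Perry is in the tradition of the Perry-Gallup likely-voter index: score each respondent on a battery of turnout questions, rank the sample, and count as likely voters the top-scoring share equal to the expected turnout rate. A minimal sketch follows; the specific items, weights, and cutoff are illustrative assumptions, not Pew's actual battery:

```python
# Sketch of a Perry-Gallup-style likely-voter cutoff: score respondents
# on a battery of turnout items, sort by score, and keep the top share
# of the sample equal to the expected turnout rate. The items below are
# illustrative, not an actual survey instrument.

def turnout_score(respondent):
    """Sum one point per 'likely voter' answer in the battery."""
    items = [
        respondent.get("thought_given_to_election", False),
        respondent.get("knows_polling_place", False),
        respondent.get("voted_in_last_election", False),
        respondent.get("intends_to_vote", False),
        respondent.get("follows_government_affairs", False),
    ]
    return sum(bool(x) for x in items)

def likely_voters(sample, expected_turnout):
    """Return the top-scoring respondents, sized to expected turnout."""
    ranked = sorted(sample, key=turnout_score, reverse=True)
    cutoff = int(len(ranked) * expected_turnout)
    return ranked[:cutoff]

# Tiny invented sample; with 50% expected turnout, the scale keeps the
# two highest scorers and drops the rest.
sample = [
    {"intends_to_vote": True, "voted_in_last_election": True,
     "knows_polling_place": True},
    {"intends_to_vote": True},
    {"thought_given_to_election": False},
    {"intends_to_vote": True, "voted_in_last_election": True},
]
voters = likely_voters(sample, 0.5)
```

Kohut's 2002 anecdote maps directly onto this structure: removing one item from the battery changes every respondent's score and can shift who clears the cutoff, which is how an ad hoc change skewed the projection.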
I'd like to take a more forward look at what trends you've seen developing in survey research. If you could try to imagine a world in ten or twenty years, how differently do you think the very best political surveys will be conducted?
I really don't know the answer to that. Hopefully somehow we're going to solve the problem of a sampling frame for online surveys, because I'm a firm believer that unless you have a sampling frame from which you can draw samples of people online, it's hard to do these post-facto weightings of people who opt in to samples and make that work. I haven't seen it yet to my satisfaction. Obviously means of communication are so much more sophisticated and varied - the old land-line telephone will probably be a relic - so I don't have a good answer for you. I'm confident this is a profession that is pretty nimble and full of people who are survivors and will figure out a way to cope with it. What that way is, I'm not sure.
I guess that takes me to one last topic. We've logged over 1,000 statewide polls in our database at Pollster.com, and more than half of those surveys have been either automated recorded voice telephone (IVR) or Internet panel. And of the 200 or so polls that have been released on the House, about half of those have been automated. You spoke about the Internet panel problem, and I wonder what sort of reaction you have to the explosion of automated recorded IVR surveys.
Well, I know they did reasonably well in one election. I would have to see them perform over a longer period of time. I'd like to see where they succeed and where they don't succeed. They always remind me a little bit of a New Yorker cartoon of two hounds sitting in front of a computer screen, and one turns to the other and says, "On the internet they don't know we're dogs." One of the things that really bothers me about this is that we just don't know who we're talking to. And that goes to the very premise of the practice of sampling: you should know who you're talking to. In any event I will take a wait-and-see attitude - I want to see more evidence before I come to some conclusion about it, other than my true discomfort with completion rates that low and not knowing firmly or clearly who you're dealing with.