07/21/2010 03:09 pm ET Updated May 25, 2011

The Perils Of Polling D.C. Elites

On Monday, Politico published two new surveys, conducted by pollster Mark Penn, that compare the views of ordinary Americans to “elites in Washington.” The story concluded that D.C.’s elites “have a strikingly divergent outlook from the rest of the nation”:

Obama is far more popular while Palin, the former Alaska governor, is considerably less so. To the vast majority of D.C. elites, the tea party movement is a fad. The rest of the nation is less certain, however, with many viewing it as a potentially viable third party in the future.

The survey also reveals to a surprising degree how those involved in the policymaking and the political process tend to have a much rosier view of the economy than does the rest of the nation -- and, in some cases, dramatically different impressions of leading officeholders, political forces and priorities for governing.

But do D.C.’s elites have different views because of their proximity to power? Or are those differences inherent in the demographics used by this survey to define the “D.C. elite”? Let’s take a closer look.

That label “elite” can mean a lot of things. In this case, it means more than just members of Congress, their staffs and senior political appointees in the executive branch. Rather, this poll intended to measure the larger D.C. political milieu, the upper income portion of D.C.’s “governing class.” Here is the description from Monday’s story:

To qualify as a Washington elite for the poll, respondents must live within the D.C. metro area, earn more than $75,000 per year, have at least a college degree and be involved in the political process or work on key political issues or policy decisions.

Now this point may seem like nit-picking, but there is a difference between the thousand or so individuals who wield real power and influence in D.C. and the much larger group — probably numbering in the hundreds of thousands — who live in the region, have a college degree, earn more than $75,000 a year and describe themselves as somehow “involved in” politics or policy. That larger group is no doubt far easier to survey, and it may well provide a decent surrogate for the attitudes and worldview of the smaller and more powerful few, but it is different.

Next, consider that the “strikingly divergent outlook” of D.C.’s political elites as measured in this survey may owe as much to their socioeconomic status and partisanship (as defined in this survey) as to their proximity to Washington policymaking. A quick check of the cross-tabs for the Penn/Politico general population sample, for example, shows that better-educated and higher-income adults nationwide tend to be more optimistic about the economy, feel more insulated from the effects of the economic downturn and are more convinced that the Tea Party “is a fad” (to name three).

Also, the 227 respondents identified as D.C. elites give Democrats a two-to-one advantage (51% to 26%) on party identification. That is probably an accurate reflection of D.C.’s upper middle class political milieu — which is certainly different from the nation as a whole — but it also helps explain some of the observed differences in attitudes toward the Tea Party, political leaders and issue priorities. Again, the cross-tabs show that among all adults sampled nationwide in the Penn/Politico survey, Democrats were more likely than Republicans to say the nation is headed in the right direction (51% vs. 7%), to consider the Tea Party a “fad” (39% vs. 17%) or to rate President Obama favorably (84% vs. 16%).

I wonder how different the “D.C. elites” would look from “elites” nationwide with comparable demographics and partisanship (i.e., with college degrees and incomes over $75,000, weighted to show a 2:1 Democratic advantage). Maybe socioeconomic elites in Washington are not all that different from similarly situated elites nationwide.
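The arithmetic behind that hunch is easy to sketch. Using the partisan favorability figures from the cross-tabs (84% of Democrats and 16% of Republicans rating Obama favorably), a short calculation shows how far a topline can move on party mix alone. The even national split below is a stand-in, and independents are left out entirely, so treat this strictly as an illustration:

```python
# Back-of-the-envelope: how much of the "elite" gap could come from
# party mix alone? The favorable ratings come from the reported
# cross-tabs (Obama favorable: 84% among Democrats, 16% among
# Republicans). The even national split is a stand-in, and
# independents are ignored, so this is illustrative only.

obama_favorable = {"dem": 0.84, "rep": 0.16}

def topline(mix):
    """Aggregate favorability for a given Dem/Rep mix (shares sum to 1)."""
    return sum(mix[party] * obama_favorable[party] for party in mix)

national = {"dem": 0.5, "rep": 0.5}           # stand-in for a roughly even split
dc_elite = {"dem": 51 / 77, "rep": 26 / 77}   # the poll's 51% vs. 26%, rescaled

print(round(topline(national), 2))  # 0.5
print(round(topline(dc_elite), 2))  # 0.61
```

Even with identical opinions within each party, shifting the mix from an even split to the elite sample’s 2:1 Democratic tilt moves the topline by roughly eleven points — before any effect of “proximity to power” enters the picture.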

Finally, an important postscript: Both surveys were conducted “online.” In this case, I won’t condemn Penn and Politico for conducting an online survey (though many of my pollster colleagues would), mostly because polling a “rare” population like “D.C. elites” would be prohibitively expensive using more conventional methods. But I wish Politico had at least offered a sentence or two describing the methodology and acknowledging that the “science” of online surveys remains a subject of debate among pollsters.

Let me try to compress that debate to a few paragraphs. Unlike most conventional telephone polls, which begin with a random sample of telephone numbers or registered voters, online polls begin with non-random “panels” of Americans who agree to complete surveys online. They are typically recruited using banner advertisements on web sites and usually receive some form of token financial compensation for each survey they complete. Online pollsters then use various methods (usually statistical weighting) to try to transform the completed interviews into a representative sample of a larger population.
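To make that weighting step concrete, here is a minimal sketch of its simplest form — cell weighting against a single demographic benchmark. Every group name, count and percentage below is invented for illustration; real online pollsters weight on several variables at once, and some use more elaborate matching or modeling:

```python
# Toy illustration of cell weighting for a non-random online panel.
# All categories and numbers below are invented for illustration.

# Population benchmarks (e.g., from Census estimates): share of adults per group
population_share = {"college_grad": 0.30, "no_degree": 0.70}

# Composition of the completed interviews (online panels often skew educated)
panel_counts = {"college_grad": 600, "no_degree": 400}
panel_total = sum(panel_counts.values())

# Each respondent's weight = population share / panel share for their cell
weights = {
    group: population_share[group] / (count / panel_total)
    for group, count in panel_counts.items()
}

# A weighted estimate: suppose 40% of grads and 60% of non-grads hold some view
support = {"college_grad": 0.40, "no_degree": 0.60}
unweighted = sum(panel_counts[g] * support[g] for g in panel_counts) / panel_total
weighted_total = sum(panel_counts[g] * weights[g] for g in panel_counts)
weighted = sum(
    panel_counts[g] * weights[g] * support[g] for g in panel_counts
) / weighted_total

print(round(unweighted, 3))  # 0.48 -- skewed toward the over-sampled graduates
print(round(weighted, 3))    # 0.54 -- matches the population mix (.3*.4 + .7*.6)
```

Note what the adjustment can and cannot do: it forces the sample to match the population on the weighting variables, but it cannot correct for ways panel volunteers differ from non-volunteers *within* those cells — which is exactly the question the benchmark studies below try to answer.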

How well do the adjustments work? The few independent efforts to assess the accuracy of online polling against known benchmarks tell us that online polls are less accurate, although how much less probably depends on the application and can be hard to predict. Some argue that online panels should never be used to estimate “population values”; others consider the observed differences in accuracy small relative to the reductions in survey cost (for more details, see my two columns on this subject written last year).

(Past interests disclosed: My website, Pollster.com, was owned and sponsored by an Internet polling company, YouGov/Polimetrix, until two weeks ago, when it was acquired by the Huffington Post.)

Generalizations aside, the Politico articles offered no real description of how the poll was conducted, so I emailed Mark Penn to ask for more detail. He tells me they used the e-Rewards market research panel. They weighted the general population sample by gender, age, education and race to match Census estimates (“within 2 percent”). However, they did not weight the D.C. elite sample beyond screening for the “key criteria listed of college education or higher, 75k of income and selected occupation levels.”

All of this leaves me with two final questions: How many college educated, upper-income D.C. policy and political wonks “earn e-Rewards Currency just for sharing [their] opinions?” And if the D.C. elite that are part of the e-Rewards panel have characteristics or opinions that differ from those that are not, how would we know?