I am a huge fan of your work and deeply appreciative of all the effort you and your staff have put into making pollster.com one of the best political sites on the Internet.
I do have to confess, though, to being deeply disturbed by the debacle with Strategic Vision. The fact is that there have been problems with the shop for years, yet little attention was paid, even while respectable bloggers (such as electoral-vote.com) made the call in 2004 to stop reporting SV's numbers as they were consistently, and suspiciously, pro-GOP. SV appears to me to be a very bald-faced effort to gratuitously influence national and local debates through nefarious means, and could have seriously damaged the reputation pollsters have worked so hard to build over the preceding decades. Even worse, Strategic Vision was enabled by people who damn well should have known better, like yourself.
Your site is a one-stop shop for journalists, pundits, Administration officials, etc., and anything that gets reported by you is magnified because of that. Moreover, these people do not have the time or training to effectively evaluate polls. As such, you have a responsibility to ensure methodological rigor is adhered to by the pollsters whose results you report, and you must begin to call out everything from being a consistent, over-the-top outlier to having an uncommonly large (such as Kaiser) or uncommonly small (Fox) party ID spread. I am not even saying to stop reporting polls like Kaiser or Fox; simply make it clear that there are methodological hang-ups with the data that your readership should be aware of. Your "general philosophy" of reporting results as long as the pollster "purports" to adhere to methodological basics is at best lazy, at worst dangerous. Like it or not, websites such as yours have become such powerful aggregators of information that you must impose some kind of control to limit the ability of the mendacious and malicious to have an undue influence. You must be a Wikipedia, not a Google.
I agree with DG's general argument: Sites like ours need to do more to help readers evaluate individual pollsters and their methods. That was the spirit of the three-part series I wrote in August titled "Can I Trust This Poll," and the reason why I want to use our site to actively promote better methodological disclosure by pollsters.
That said, I'll cop to "lazy" in just one respect: On Monday, I gave short shrift to our "general philosophy." It combines two goals: (1) making all poll results available and (2) providing an informed and critical context -- through interactive charts and commentary -- for understanding those results. The best examples are the tools we built into our interactive charts (the "filter" tool and the ability to click on any point and "connect the dots" for that pollster) to make it easy to compare the results of any individual pollster to the overall trend. We have also devoted considerable time to commentary on pollster house effects, both generally and for specific pollsters (like Rasmussen).
I'll also take issue with the idea that we "damn well should have known better" with respect to Strategic Vision. The evidence that they were a "consistently over-the-top outlier" relative to other pollsters is weak. This was Charles Franklin's take three years ago:
I tracked 1486 statewide polls of the 2004 presidential race, of which Strategic Vision did 196. The Strategic Vision polls' average error overstated the Bush margin by 1.2%. The 1290 non-Strategic Vision polls overstated KERRY's margin by 1.3%. Further, the variability of the errors was a bit smaller for Strategic Vision than for all the other polls combined.
Try the connect-the-dots tool on the 2008 Obama-McCain charts for Pennsylvania, Florida, Georgia and Wisconsin (the states where Strategic Vision released five or more "polls"), and make your own judgments for 2008.
But again, I tend to agree with DG's central thrust: We can do better. I am particularly intrigued by DG's comment about being "a Wikipedia, not a Google." What Wikipedia is about, for better or worse, is "crowdsourcing." A few weeks ago, the Wall Street Journal described crowdsourcing as the idea that "there is wisdom in aggregating independent contributions from multitudes of Web users." How might a site like ours help individuals collaborate on efforts to evaluate pollsters? If you have thoughts or suggestions on any of this, we would love to hear them.