Thursday, July 1, 2010

Daily Kos/Research 2000 Polling Controversy

Quite a bit of visibility is being given to the dispute between the website Daily Kos and the polling firm Research 2000. See here for the latest information on the dispute, including a copy of the lawsuit filed by Daily Kos. And here’s a review of some of the various blog postings on the situation over the last 24 hours.

Daily Kos hired Research 2000 to provide polling data over the last several years. Daily Kos now alleges that Research 2000 fabricated at least some survey results and did not actually conduct interviews. We certainly don't know all of the facts in this situation, other than the allegations from one party (Daily Kos) and a brief statement of refutation from the other (Research 2000). I'm sure more will come out in the days, weeks, and months ahead.

In a general sense, however, any public arguments about the veracity of polling data are of concern to polling professionals -- just as allegations of plagiarism are of concern to journalists, and allegations of data fabrication in published research are of concern to scientists.

Allegations or questions about the integrity of a number of different types of scientific data are unfortunately not new. (See David H. Freedman's book Wrong for an interesting review of many situations in which scientific data, much of it in the medical field, has been shown to be false or fabricated.)

Consumers nowadays have a tougher time assessing the quality of news and information. This is a result of the increased availability of news and information, exacerbated by the advent of more polarized writing and programming in which the motivation of the author or presenter is to push a point of view rather than to report neutrally or scientifically.

There has certainly been a demonstrable increase in the amount of polling information available to the average consumer. This is good as far as that goes. I'm gratified that people are interested in poll results and that editors and producers find them a valuable part of their news universe. Indeed, the reason that Daily Kos commissioned Research 2000 to conduct polls in the first place was presumably because Daily Kos felt that polling would be of great interest to its readers.

The proliferation of polling means more polling firms are entering the field.  That in turn can increase the probability that polling is done poorly or not done at all.

I'm often asked how the world can assess the quality and value of polls and polling data. That's a tough question to answer -- in part because problems with poll data (and in particular their interpretation) often occur even when the actual polling methodology follows usual industry practices.

Still, I would emphasize the ultimate responsibility that rests with the entity commissioning or releasing poll data, just as a newspaper or broadcast outlet has the ultimate responsibility for what it publishes or broadcasts. The current controversy revolves around a client-contractor relationship between Daily Kos and Research 2000. It is unclear what procedures Daily Kos may or may not have used to verify and check the data it received from the survey research firm it employed before publishing it. (Daily Kos says it ultimately fired the firm.) Nevertheless, in general, a news or web outlet has an obligation to check and verify what it puts out. This is often easier said than done, of course. A number of publications have been burned in recent times when outside contractors or freelance writers have not followed standard journalistic procedures.

Once poll data are publicly released, there are more general procedures for monitoring its quality. The American Association for Public Opinion Research, of which I am president this year, has a specific list of "Standards for Disclosure" -- based on the assumption that "Good professional practice imposes the obligation upon all survey and public opinion researchers to disclose certain essential information about how the research was conducted."

There is no requirement that researchers adhere to this code, but AAPOR will police compliance and publicly call attention to instances in which that information is not forthcoming when requested. Importantly, AAPOR has a new "Transparency Initiative," spearheaded by last year's AAPOR president, Prof. Peter Miller of Northwestern University, which would allow polling firms to voluntarily commit to releasing methodological and other essential information about their polls. The idea is that releasing this type of information provides interested parties with more of the wherewithal to make judgments about the quality and value of poll data.

Again, terrible though it may be if a research firm is found to have falsified poll information, I think the more prevalent challenges with the public's use of polling information have to do with things such as question wording and misinterpretation of findings -- even when the survey itself is methodologically sound.

The world, as we all know, has moved into a more information-intensive environment. This is true across the board, not just when it comes to polling information. But in this new environment, it can be harder to isolate excellence, truth, rigorous methods, integrity, and honesty in the information and data disseminated by the various outlets that confront us. Polling data are extremely important in a democracy. We find increasing interest in polling on a variety of fronts. With this increased interest come increased chances of substandard polling data entering the news and information stream -- and an increased need for vigilance by those who use it.

3 comments:

MidPointMan said...
July 2, 2010 at 6:14 PM  

If you follow the arguments closely, the data is stacking up in favor of R2K having completed the polling and simply used minor rounding techniques to make the crosstabs fit the topline number as accurately as possible.

This is no evidence of fraud.

Having run numerous regressions on the crosstabs, I find stable weightings in the tabs on Age, Gender, Race, and Region.

This could be the result of highly stratified sampling or a multidimensional weighting procedure.

The only weight that varies over time (observed by examining the residuals) is PartyID.

This is expected, and fits the pattern of a legitimate set of crosstabs.
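
To illustrate the kind of regression I mean, here is a minimal sketch (Python, gender only, with simulated numbers -- nothing here is actual Research 2000 data). If the published toplines are close to a fixed weighted average of the published breakouts, least squares recovers stable implied weights across polls:

```python
# Minimal sketch of recovering implied gender weights from published crosstabs:
# topline ~= w_male * male + w_female * female across many polls.
# All numbers are simulated for illustration, not actual R2K data.
import numpy as np

rng = np.random.default_rng(0)
n_polls = 50
true_w = np.array([0.48, 0.52])              # hypothetical gender weights

male = rng.uniform(30.0, 60.0, n_polls)      # % approval among men, per poll
female = male + rng.uniform(-8.0, 8.0, n_polls)
topline = np.round(true_w[0] * male + true_w[1] * female)   # published topline, rounded

X = np.column_stack([male, female])
w_hat, _, _, _ = np.linalg.lstsq(X, topline, rcond=None)
resid = topline - X @ w_hat

print("implied weights:", w_hat)             # close to [0.48, 0.52]
print("residual std:", resid.std())          # small and stable => fixed weighting scheme
```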

All of the anomalies thus far appear to result from a normal QA process on the crosstabs to ensure they total 100% and the average of the breakouts (Male vs. Female) matches the topline number.
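
The reconciliation step I have in mind would look something like this sketch (again with made-up numbers, assuming equal male/female weights for simplicity -- my guess at the mechanism, not R2K's actual procedure):

```python
# Sketch of a rounding QA step: after rounding, nudge the breakouts so their
# average reproduces the published topline exactly. Illustrative only.
def reconcile_breakouts(male_raw: float, female_raw: float) -> tuple[int, int, int]:
    topline = round((male_raw + female_raw) / 2)   # published, rounded topline
    male, female = round(male_raw), round(female_raw)
    gap = 2 * topline - (male + female)            # discrepancy introduced by rounding
    male += gap                                    # push it into one cell
    return male, female, topline

print(reconcile_breakouts(43.4, 46.6))   # (43, 47, 45)
print(reconcile_breakouts(50.5, 51.4))   # (51, 51, 51)
```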

There appears to be an unfortunate feeding frenzy underway based on some very amateurish analysis.

Any competent researcher would know these patterns could be the result of weighting and rounding QA processes.

Research 2000 is guilty of one thing in my mind, and that is using very biased weights.

The stable weights I derived heavily leaned in favor of Democrats and minority voters.

Given the sponsor of the survey, this is hardly a surprise.

No fraud in my opinion, but simply selling what the client (a far-left-wing blog) wants to buy.

Anonymous said...
July 22, 2010 at 5:55 AM  

I guess they get what they pay for

Anonymous said...
August 1, 2010 at 1:18 PM  

The lack of disclosure from Research 2000 is, at least to me, the most damning element.

If they hired a call center to perform the calls, simply produce the invoices showing the calls were bought and paid for.

If the calls were done internally, produce the hard drives upon which the recordings of the calls exist.

Either or both of these things would be trivial to do, but, to my knowledge, R2K has done neither.


