Friday, March 8, 2013

More on Pre-Election Polling

Mark Blumenthal at The Huffington Post has written again about Gallup’s methodological work in our 2012 pre-election polling -- albeit rather belatedly, given our original announcements about our review plans in late January. In fact, the New York Times (on February 4) and others long ago let their readers know that we are conducting the review in conjunction with Dr. Mike Traugott of the University of Michigan.

Mark has written a number of posts about Gallup now, but we haven’t seen much from him about other pollsters whose final election estimates were similar to Gallup’s. We would hope to see Mark make a similar investment of effort in looking at the broader pattern of pollsters under-representing support for Obama in 2012 -- something that NCPP and others have documented. We are all dealing with very complex sampling, weighting, and measurement issues that involve judgment calls as well as a scientific basis for how they are executed. Gallup is transparent about what we do. So far, Mark hasn’t provided his readers much context by evaluating exactly how other specific pollsters handle such issues as cellphone sampling, weighting, and likely voter procedures.

But, regardless of what others do, when we are in the business of pre-election polls, we want them to be absolutely in line with the final vote outcome. That’s our goal.

The issues Mark points out in his piece are only some of the ones we outlined on our list of focus points back in late January (see here), and I don’t see much new information that sheds light on them. As he reminds us, our team will be releasing a thorough review of our analysis as soon as it’s ready. That will give both Gallup and other public pollsters plenty of time to consider the implications for future election polling.

On the issue of likely voter models, we are, of course, highly aware of the importance of these procedures and, in fact, did make changes to the way in which we used the model this past November. We, along with -- I’m sure -- many other pollsters, are carefully reviewing the implications of these models in a world with changing voting procedures and environments. That’s one of our big focus points.

Mark states the obvious in saying that many U.S. households are not included in published phone directories -- a fact that, of course, we were and are highly aware of. The relevant point is that very few of these unlisted households are landline-only, which means they are still covered by the 50% of our interviews conducted by cellphone. In addition, Mark may not be aware that SSI's landline listed sample includes electronically listed numbers, including VOIP numbers, so the sample is not the same as "published phone directories." Still, as we pointed out, the issue of RDD samples is a very important one, and we are running a full side-by-side RDD vs. listed experiment for landlines -- one that will provide factual information on this issue, rather than conjecture.

The weighting implications of the sample design are significant, but Mark doesn’t note that by using 50% cellphones, we most likely weight the cellphone-only and cellphone-mostly portions of our sample LESS than is necessary for other national pollsters who have smaller percentages of cellphone interviews in their samples. Second, the weighting implications of the nature of the landline sample are a very small, straightforward step. Conjectures about “heavier” weighting, like most of the conjectures in the piece, need to wait until we have completed a full review of all aspects of weighting in pre-election polling.

Mark points out several things in his piece that are certainly worth highlighting. For one, he notes that we at Gallup have been continually modifying our sampling and methods, with changes instituted as we go, including additional changes on Jan. 1. As he notes, our tracking of presidential job approval is in line with the average of other pollsters this year, if that is an appropriate standard to look at. He also notes that the industry as a whole underestimated Obama’s popular vote performance in 2012, suggesting some generic issues facing the industry that are not specific to Gallup.

I would reiterate that no one is more involved in reviewing how pre-election polling is done than we are, including our current review of a lot more than the few points Mark speculates on in his piece.

