The room is full here at AAPOR, mostly, I suspect, to hear a presentation of Pew's comparison of results from a dual-frame (landline plus cell) telephone survey and Google Consumer Surveys. There is no shortage of people I've talked to, here and elsewhere, who think that Pew was overly kind in characterizing the differences. So it will be interesting to see how this plays out. Granted, it's back to keeping score, but I can't resist watching.
Scott Keeter is doing the presentation, and already I feel better. (I'm sure he didn't mean it as a joke, but Scott started by describing Google's quota sampling strategy as based on Google knowing "something about users.") More seriously, he is positioning this in a fit-for-purpose framework.
Scott has shown a chart that puts the mean difference across 52 comparisons at 6.5 points. Not awful, and not great. Some topics seem to work well, but others do not. The problem, of course, is that there is currently no way to predict when it will work and when it will not.
He says that Pew will continue to use Google Consumer Surveys, but not for point estimates. The tool seems useful for question testing and for quick feedback to help with survey design. Hence the link to fit-for-purpose. But hardly a game changer.