Previous month:
November 2008
Next month:
January 2009

Posts from December 2008

More on the cost of interviewing cell phones

Pew has just released a new study that confirms some of our own experience with the cost of interviewing respondents on cell phones. Overall, the report is about the impact of including cell phones in pre-election polls during the last presidential cycle, but at the end the authors go into some "practical considerations" that are very interesting:

  • Contact rates, cooperation rates, and response rates were nearly identical between landlines and cell phones.
  • The eligibility rate (a.k.a. incidence) was only 55 percent for cell phones compared to 87 percent for landlines. The difference was due mostly to reaching a large number of cell phone users under the age of 18.
  • On average, cell phone interviews were about two and a half times as expensive as landline interviews. They divide that cost into four buckets: (1) 30 percent due to additional screening; (2) 30 percent for a $10 reimbursement; (3) 20 percent for manual dialing; and (4) 20 percent for additional labor for tracking, management, etc. (The sketch below works through what those buckets might imply in dollars.)
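
As a back-of-envelope illustration, here is a quick sketch of what those buckets might imply in dollar terms. The assumptions are mine, not Pew's: I read the four percentages as shares of the *extra* cost of a cell interview over a landline one (the report could also mean shares of the total cell cost), and I work backward from the $10 reimbursement.

```python
# Back-of-envelope sketch of the Pew cost figures. The interpretation is
# mine: the four buckets are read as shares of the EXTRA cost of a cell
# interview over a landline one.

CELL_TO_LANDLINE_RATIO = 2.5   # from the Pew report
REIMBURSEMENT = 10.00          # dollars per cell respondent, per the report
REIMBURSEMENT_SHARE = 0.30     # bucket (2): 30% of the extra cost

# If $10 is 30% of the extra cost, the full extra cost is:
extra_cost = REIMBURSEMENT / REIMBURSEMENT_SHARE           # ~$33.33

# Extra cost = (ratio - 1) * landline cost, so:
landline_cost = extra_cost / (CELL_TO_LANDLINE_RATIO - 1)  # ~$22.22
cell_cost = landline_cost * CELL_TO_LANDLINE_RATIO         # ~$55.56

buckets = {
    "additional screening": 0.30,
    "$10 reimbursement":    0.30,
    "manual dialing":       0.20,
    "tracking/management":  0.20,
}

print(f"Implied landline cost per interview: ${landline_cost:.2f}")
print(f"Implied cell cost per interview:     ${cell_cost:.2f}")
for name, share in buckets.items():
    print(f"  extra cost from {name}: ${share * extra_cost:.2f}")
```

Under that reading, a landline complete runs around $22 and a cell complete around $55; if Pew instead meant the percentages as shares of the total cell cost, the implied dollar figures shrink, but the two-and-a-half-times ratio is unchanged.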

There also is commentary on the likelihood that people who are currently cell-phone-mostly will transition to cell-phone-only, and it's not especially encouraging for the future of telephone research.

To the above I should add that there is a post on Research-Live.com that reports the finding as four and a half times as expensive, although the referenced Pew report seems to say two and a half. I'm not sure how to reconcile the two.


Is the online data quality issue fading?

So asks Inside Research in its December issue. The cited evidence is from the latest so-called "Research Industry Summit" hosted by RFL Communications, where attendance was off steeply from previous conferences and there just was not a whole lot new being said. For some time now we have been seeing panel vendors and researchers alike focused on the major abuses the industry has been talking about for three years--false panel registrations, false qualifying for surveys, heavy survey taking, speeding and satisficing, etc. All of these issues are being addressed pretty effectively across the industry, which in turn is putting a lot of people's minds at rest.

But is that enough? The obvious remaining question is whether these problems are the real problems, or whether there are more fundamental issues with online panels that have produced the kind of inconsistent results that started the discussion in the first place. I suspect there are, and that they soon will make themselves evident. The industry continues to treat panel sample much as it has always treated probability samples. Until we recognize that panels are a much different beast and that we need a whole new way of dealing with them, I believe we will continue to see inconsistent results that cause clients to doubt their validity. There is much work yet to be done.


Top 10 IT Stories of 2008

For the last 25 years or so I have liked to pronounce with boring regularity that the survey business (and by extension the research business) is increasingly about smart, strategic use of IT. In that vein I have reproduced below CIO Magazine's Top 10 IT Stories of 2008. They have better graphics.

  1. It's the Economy, stupid. The financial crisis and recession created the lens through which all other stories—including IT stories—will be viewed, looking back and looking forward.
  2. The first Internet Campaign. Barack Obama integrated technology into every phase of his campaign, leveraging the Net as a tool not just for communications but for organization. Politics, governance, and business will never be the same.
  3. The Smartphone Revolution. As iPhone mania swept the consumer market and began to seep into the enterprise, RIM pushed back with its hot new models. The age of mobile computing is here.
  4. Forecast: Clouds. Cloud computing became so mainstream that Dell tried to trademark the term. IT shops, big and small, stopped asking if the cloud was real and started integrating it into everyday operations.
  5. Vista Missed-a. Early disgruntlement with Microsoft's latest operating system didn't go away, but after another lukewarm year, it looks like Vista will.
  6. 2.0 Everywhere. Most companies still aren't so good at public-facing blogs and social networks, but the technology has permeated the culture, inside and outside the firewall.
  7. The Greening of IT. High energy costs made Green the color of the year.
  8. Globalization and its discontents. Business slowed for big outsourcers, who responded by moving up the value chain and purchasing Western companies; terror attacks in Mumbai made companies wonder how safe the flat world really is.
  9. Media unravels. The Net continued to remake the media marketplace, with major implications for advertising, marketing, and corporate communications.
  10. Big deal. HP swallowed EDS to create a global competitor for IBM.

Take heed and plan 2009 accordingly.


Cultural bias in global surveys

This is a huge topic and an area where much is written but little definitive is said. And so I read with interest a little blurb in the current issue of mySSI. One of the articles reports on research by Nielsen UAE which seems to show that people in some countries are culturally disposed to respond more positively to survey questions than those in other countries. The example at hand is a product test of some new soap products in which respondents were asked to indicate their likely intention to buy. Just 16 percent of UK respondents answered "Definitely" compared to 50 percent of Indians and 58 percent of Brazilians. The researchers interpret this as a form of acquiescence bias, something I touched on in an earlier post. To quote from the mySSI article: "Reasons for acquiescence bias include the presence of a collective culture featuring hospitality (wanting to please), negativity avoidance, polite and agreeable natures, together with economic optimism as a result of high growth in these economies."

The obvious question is whether this bias can be corrected in questionnaire design, and on that topic there is considerable disagreement. There are those who argue that some scales work better than others, while others argue that scales are inherently flawed for cross-cultural research--hence the drive toward MaxDiff. Devoted readers may remember some experimental work we did on this last year that indicated it may not be the great device its supporters have made it out to be. That aside, there is some interesting work on scales in global research that I need to get more on top of. Stay tuned.
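
For readers unfamiliar with the technique, here is a minimal sketch of the simplest way to score MaxDiff data, the count-based method: an item's score is the number of times it was picked "best" minus the number of times it was picked "worst," divided by the number of times it was shown. The soap-attribute data below are invented for illustration, and real MaxDiff analyses typically use logit-type models rather than raw counts.

```python
from collections import Counter

# Minimal sketch of count-based MaxDiff scoring. The data are invented.
# Each task shows a subset of items; the respondent picks one "best" and
# one "worst". Score = (best picks - worst picks) / times shown.

tasks = [
    # (items shown, best pick, worst pick)
    (["lather", "scent", "price", "moisturizing"], "scent", "price"),
    (["scent", "price", "packaging", "lather"], "scent", "packaging"),
    (["moisturizing", "packaging", "lather", "price"], "moisturizing", "price"),
]

shown, best, worst = Counter(), Counter(), Counter()
for items, b, w in tasks:
    shown.update(items)
    best[b] += 1
    worst[w] += 1

scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item:>12}: {score:+.2f}")
```

The appeal for cross-cultural work is that respondents must make trade-offs rather than rate every item on a scale, so scale-use differences like acquiescence are largely designed out--though, as noted above, our own experiments suggest it is no panacea.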


Some changes to The Survey Geek

I've made a few changes over the last few months that I want to bring to your attention:

  1. A while back I added a list of related sites where readers might find other stuff of interest. There is a mouse-over function that displays a short explanation of why each site might be useful.
  2. I have replaced that boring old Web 1.0 category list on the left with a trendy Web 2.0 cloud in which the font size represents the number of posts in that category (the sketch after this list shows the general idea).
  3. I've just added a search bar on the upper right side. This is quite cool in that you can search the archives as well as current posts. On the search report page you will see tabs for the blog search as well as "network" and "Web." The former refers to hits on one of the sites in the "related sites" list and the latter to the whole Web. Unfortunately, it doesn't work as well as I had hoped and sometimes does not bring you to the exact post you are looking for. You may need to do a text search on the page to get to the right item.
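
For the curious, the cloud idea in item 2 is simple to implement. Here is a minimal sketch of one common approach--not the actual widget this blog uses--which maps each category's post count onto a font-size range, using a log scale so one prolific category doesn't dwarf everything else. The category counts are invented.

```python
import math

# Minimal sketch of tag-cloud font sizing (not the actual widget this blog
# uses). Post counts are invented for illustration.

post_counts = {"online panels": 24, "cell phones": 9, "sampling": 15, "MaxDiff": 3}

MIN_PX, MAX_PX = 11, 28  # smallest and largest font sizes in the cloud

lo = math.log(min(post_counts.values()))
hi = math.log(max(post_counts.values()))

def font_size(count: int) -> int:
    """Map a post count onto [MIN_PX, MAX_PX] on a log scale."""
    if hi == lo:  # all categories have the same count
        return (MIN_PX + MAX_PX) // 2
    frac = (math.log(count) - lo) / (hi - lo)
    return round(MIN_PX + frac * (MAX_PX - MIN_PX))

for category, count in sorted(post_counts.items()):
    print(f'<span style="font-size:{font_size(count)}px">{category}</span>')
```

A linear mapping works too; the log scale just keeps a single heavily-used category from drowning out the rest.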

Hopefully loyal readers--few as you are--will find these useful.