The latest issue of POQ has a meta-analysis by Bob Groves and Emilia Peytcheva that looks at the impact of nonresponse rates on nonresponse bias. Its primary finding is by now familiar: we can expect significant nonresponse bias "when the causes of participation are highly correlated with the survey variables." In other words, if we do a survey on a topic and mostly get respondents who are interested in that topic, then our results will likely differ from what we would have found had we also reached the respondents who were not interested in the topic and therefore did not participate.
It bothers me that I am unsure how to think about the implications for market research. In these days of abysmal response rates and online panels, it is becoming increasingly common to make surveys more attractive to respondents by disclosing the survey topic at recruitment. One panel company, Survey Sampling, even offers a service that, as near as I can tell, matches people from its panel to the survey topic at the time the sample is drawn. In many surveys we screen respondents so that we only get people who already own certain products or plan to buy them in the near future. Is this a good thing?
The answer probably depends on what a client is trying to learn and what business decision is driving the research. If we do a survey on, say, health insurance products and mostly get people who are interested in health insurance, might the client end up designing products that are less successful in the marketplace than if we had interviewed a more balanced sample? The purist in me argues that it is always best to get as broad and representative a sample as we can, but that seems to get tougher all the time. Maybe the best we can do is make sure we understand the impact of so-called "topic salience" on nonresponse bias and be sure to raise it in our design discussions with clients.