I make no secret of the fact that I loathe filling in questionnaires for feedback or research, even though I usually take up the offer to do so.
Typically they are poorly designed and riddled with the questioner's own perceptions and slant - they have a particular set of things they want to measure, and the questionnaire asks about those things in a way that guarantees those end measures. Thus, the questions become a self-fulfilling prophecy as far as the meaning in the data is concerned.
Here's today's example, which irked me sufficiently that I abandoned the questionnaire partway through. It's from a retail website where I was looking to buy an iPad 2 in time for Christmas. A few things about the experience were not great, though this organisation has some helpful people on Twitter who gave me good suggestions, so I was keen to give balanced feedback.
But here's the question:
Again thinking about the main thing you were looking to buy, which one of these would you say was the most important in deciding which product you wanted to buy?
Perhaps for most products this question makes sense - purchase choices are made predominantly on the basis of one or two of the above characteristics. However, not so with the iPad, or pretty much any Apple product for that matter. The unique selling point of Apple, its very "value proposition" if you like, is that it beautifully combines all three of the above elements. I am shopping for an iPad because it scores marvellously on look and feel, technical specs and functionality in a way that most (all?) of its competitors do not.
As such I can't answer the question meaningfully - I'd be telling the researcher something they are expecting to hear, not something they haven't allowed for and that I actually want to say.
They could have chosen a different format for this question - "Which aspects most influenced your decision?" with multiple selections available. They would still get a distribution of answers that would allow the most significant result to be drawn out. But by forcing a choice of a single answer, they are actually skewing the results and baking the researcher's preconceptions and prejudices about what data needs to be reported into the questions themselves.
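To illustrate the point, here's a minimal sketch of how multi-select responses still yield a usable distribution; the response data below is invented purely for illustration:

```python
from collections import Counter

# Hypothetical multi-select responses: each respondent ticks every
# aspect that influenced their decision, not just one.
responses = [
    ["look and feel", "technical specs", "functionality"],  # e.g. an iPad buyer
    ["technical specs"],
    ["look and feel", "functionality"],
    ["look and feel"],
]

# Tally how often each aspect was selected; the distribution still
# surfaces the most significant factor without forcing a single choice.
counts = Counter(aspect for picks in responses for aspect in picks)
for aspect, n in counts.most_common():
    print(f"{aspect}: selected by {n} of {len(responses)} respondents")
```

Respondents who value everything equally (the Apple buyer) can say so, and the researcher can still rank the aspects by how often each was selected.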
This is bad design and leads to misleading statistics.