Monday 13 May 2013

A Monte Carlo approach to asking questions


In the early days of the internet, designing a website often meant long discussions with clients about routing: which link should take you to which page, and from there to which page next. Navigating a website back then was like crawling through narrow tunnels; you had to back out of one to get anywhere else. Then some bright spark realised that a page could be linked from more than one place, so you could move from one part of a site to another far more easily.

I make this point because I think we suffer a similar degree of tunnel thinking when we write surveys: we only ever consider asking a question in one way. I would encourage you to think about the opportunity of asking questions in more than one way.

How often do you struggle to pin down the exact wording of a question in a survey, and find yourself in two minds about how to word it? Rating something is a classic quandary. Do you ask how much they like it, how appealing it is, how keen they are to buy it, or how much better or worse it is than other things? Asking people for open-ended feedback is another area where there is an almost infinite number of ways to word a question, and I have had a career-long obsession with the best way to word this type of question. For instance, if you want feedback about a product you might word it "please tell us what you like or dislike about this product", or "what do you think about this product? what do you like or dislike about it?", or "if you were criticising this product, what would you have to say?", or "what is the best thing about this product, and the worst thing?". Everyone answering these questions will respond in a slightly different way. Some wordings will deliver better answers than others; some will work more effectively with some groups of people than with others; some may not deliver the same volume of feedback but may trigger more thoughtful responses.

OK, so the survey has to go live today, you don't have time to test, and you are not sure which wording will generate the best feedback; what do you do?

The approach most people take is to pick the wording they think is best, or the one a small committee of them thinks is best. But have you ever thought about simply asking the question in every conceivable way, randomly assigning a wording to each respondent, and then mashing up all the answers?
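
To make this concrete, here is a minimal sketch in Python of the kind of randomisation I mean. It is illustrative only: the wordings are the product-feedback examples from above, and collect_answer is a hypothetical stand-in for whatever your survey engine actually does.

    import random

    # A basket of wordings for the same underlying question (the product
    # feedback examples from above).
    WORDINGS = [
        "Please tell us what you like or dislike about this product.",
        "What do you think about this product? What do you like or dislike about it?",
        "If you were criticising this product, what would you have to say?",
        "What is the best thing about this product, and the worst thing?",
    ]

    def collect_answer(wording: str) -> str:
        """Stand-in for the survey engine: show the wording, return the answer."""
        return "free-text answer to: " + wording

    # Each respondent sees one wording drawn at random; every answer goes
    # into a single pooled data set, with the wording recorded so results
    # can still be broken down by variant afterwards.
    responses = []
    for respondent in range(1000):
        wording = random.choice(WORDINGS)
        responses.append({"wording": wording, "answer": collect_answer(wording)})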

Now, I have been playing around with this of late. It's not difficult from a technical point of view, and I am really loving the data I get back (sorry, not sure whether you are supposed to love data, or if that phrase is even appropriate).

What I am finding is that for closed rating questions, asking in a random basket of ways appears to deliver* more stable answers, ironing out the differences caused by question-interpretation effects, and for open-ended questions it appears to deliver* a greater range of more nuanced feedback than asking the question one way.

I would describe this as a Monte Carlo approach, because that is essentially what it is: I am netting out a mass of random guesses at the best way to ask each question. I have no way of knowing which wording is the most accurate, but netting out all of them is more reliable than betting the whole measurement on a single wording.
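
To see why the basket can behave this way, here is a toy simulation, with every number invented for illustration: each wording is assumed to nudge a product's true rating by its own hidden bias, so a single wording bakes its bias into the result, while the random basket averages the biases out.

    import random
    import statistics

    TRUE_SCORE = 7.0                              # the "real" rating (invented)
    BIASES = {"like": +0.4, "appeal": -0.3,       # invented per-wording biases:
              "buy": -0.6, "compare": +0.3}       # each nudges answers up or down

    def respond(bias: float) -> float:
        """One respondent's rating: truth + wording bias + individual noise."""
        return TRUE_SCORE + bias + random.gauss(0, 1.5)

    def run_study(n: int, wordings: list) -> float:
        """Mean rating when each respondent gets a random wording from the list."""
        return statistics.mean(
            respond(BIASES[random.choice(wordings)]) for _ in range(n))

    random.seed(1)
    print("single wording:", round(run_study(400, ["buy"]), 2))       # inherits -0.6 bias
    print("random basket: ", round(run_study(400, list(BIASES)), 2))  # biases average out

Note that the basket does not remove bias altogether; it converges on the average bias across the wordings, which is usually milder than the worst single wording's.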

What do you think? I appreciate that I probably need to back this up with some solid research evidence, as there are lots of issues here, so I am planning some larger-scale experiments to test the theory more thoroughly. But before I dive in, I am open to critical feedback.

2 comments:

  1. Hi Jon,

    Inspiring idea. The most obvious issue I see is that you will run into practical problems as soon as you start cross-tabulating the results of one of your Monte Carlo basket questions with other variables. So at least until your approach is common practice, it will drive up the number of interviews needed, either to convince your audience or to convince yourself. Probably both ;)

    To me this approach seems reasonable in cases where you want to measure a value as exactly as possible, since it reduces designer bias. For benchmark studies the current standard approach is still fine, provided all benchmarked items suffer the same level of question bias. So you would have to show that, e.g., people who are more like the questionnaire designer tend to answer differently from people of a different personality type.

    HTH, and looking forward to seeing where this idea leads!

    Jan

  2. Hmmm. Let's assume there is a best way to ask the question, which you just do not know. Then you have two ways of going forward:
    A: Randomly pick one wording and use only that one. (The standard way.)
    B: Ask all the wordings, assigned at random across respondents.

    The expected value of both ways is the same. The difference is in the variance: with A, if you happen to pick the best question you get the optimal result, and if you happen to pick the worst question you get the worst result. B gives you an average result by definition.
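
    A quick simulation of this argument (all numbers invented): run each strategy over many replicated studies and compare the spread of the study-level results. Both strategies centre on the same value, but A's outcome swings with the luck of the one-off pick, while B hugs the average.

        import random
        import statistics

        TRUE_SCORE = 7.0
        BIASES = [+0.4, -0.3, -0.6, +0.3]        # invented per-wording biases

        def study_mean(per_respondent: bool, n: int = 400) -> float:
            """Mean rating from one study of n respondents."""
            fixed = random.choice(BIASES)        # strategy A's one-off pick
            total = 0.0
            for _ in range(n):
                bias = random.choice(BIASES) if per_respondent else fixed
                total += TRUE_SCORE + bias + random.gauss(0, 1.5)
            return total / n

        random.seed(2)
        a = [study_mean(False) for _ in range(2000)]  # A: one wording per study
        b = [study_mean(True) for _ in range(2000)]   # B: random wording each time

        # Same mean, very different spread across repeated studies.
        print("A:", round(statistics.mean(a), 2), "sd", round(statistics.stdev(a), 3))
        print("B:", round(statistics.mean(b), 2), "sd", round(statistics.stdev(b), 3))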
