Friday, 27 June 2014

One market researcher's viewpoint on the World Cup



So England got dumped out in the first round of the World Cup and everyone in our country feels disappointed, an emotion we are quite used to feeling. So begins a round of postmortems that we all probably secretly enjoy as much as the competition itself, working out who to blame for the team's failure.  In past World Cups this has been quite easy: for example David Beckham kicking a player and getting sent off, having a turnip head as a manager, or a lack of goal line technology.  But this year we are all fairly universally perplexed. I have read a lot of overfit analysis, none of which is particularly convincing because, in the scheme of things, we all thought we played quite well and we had a sparky young team. It seems like we were just a bit unlucky this time round.

The role of randomness

It's quite hard to accept the role that randomness plays in the outcome of world cup matches.  Every nation, when it gets knocked out or fails even to qualify, probably believes its team was "unlucky" and better than it actually is.  So what is the relative importance of luck v skill when it comes to winning the world cup?

Unlike the premiership, where there are 38 games over which the performance of the teams is largely correlated with the quality of the squads (take a read of The Numbers Game* by Chris Anderson and David Sally), the performance of a world cup squad cannot be predicted from the aggregated skill value of the squad: there is a lot more randomness involved.  Imagine if the premiership only lasted 3 games: in two out of the last four seasons the team that won the premiership might have been relegated.

*a must read if you are a market researcher and like football!

There is another factor too: in the premiership the best players get sucked up into the best teams, hence the much higher win ratios between the top and bottom performing sides compared to the world cup, where the best players are distributed more randomly, roughly in proportion to the size of each footballing nation.  This in turn makes the outcome of international matches even more random.

Who influences the outcome of a match?

If you look at who has goal scoring influence across a team you will notice that the negative effects of conceding goals are pretty well distributed across a team, but the positive effects of scoring goals are a lot more clustered amongst a few individuals. See the chart below, showing statistics from an imaginary team based on typical performance data taken from the premier league.
 

The potential performance of a world cup team must be measured not by the overall skill value of the team but by the value of a smaller network of attacking players who can make the most game-changing contributions. In the case of players like Lionel Messi, a single player can carry the whole goal scoring burden of a squad.  It only takes one or two randomly allocated star players in a world cup team to elevate its chances (think of Pele or Maradona).

The performance of the defence is more a case of luck. You might have one or two unreliable defenders who you would not want in your premier league squad because you know that over the course of a season they may cost you a match or two, but at the individual game level (and a world cup is decided by the outcome of three or four key individual games) the chances are a poor defender might well ride their luck.   The other two important factors defenders have to contend with are the extra stress and the lack of team playing experience of a world cup team compared to a premiership squad.  Without doubt stress plays a big part: players are really hyped up and there is probably an order of magnitude increase in tension, which is the root cause of many errors in world cup matches. If you look at the defensive mistakes that cost us goals in recent world cups, some of the biggest were made by effectively our most reliable players: John Terry, Steven Gerrard and Phil Neville.  There is also a lack of formation practice to contend with, which is particularly critical for defence. How many hours of playing together does it take for a defence to gel? Most world cup squads have days rather than months to prepare.

A team like England might well have a higher aggregated skill average than other teams, but this does not translate into the same reliable performance ratios you see in the premiership. This is because over half the value is based on their defensive skill, which can be completely undermined by bad luck, and we don't have a cluster of super-skilled players to lift the team out of bad-luck matches by scoring more goals than we let in.

The influence of the Ref

To win world cup matches you are much more reliant on the manager’s structural approach, the contributions from clusters of individuals who might form good attacking combinations and one other person – the REF!  Or rather, the ref in conjunction with the crowd and the linesmen.

If you analyse a typical game you will find that the number of major goal scoring decisions in the hands of the referee and linesmen is actually enormous compared to any individual player. It's difficult to put a figure on it, but if we say that on average there are about 6 decisions by the referee that could have affected a goal one way or another*, it's instantly obvious how much relative influence they have on a match.

*That is a wisdom of the crowd estimate: we asked a collection of football fans how many goal-affecting decisions are made in a match by the referee and linesmen, and the median estimate was six.


Nine times out of ten these decisions balance themselves out, but refs are only human and so it's no wonder there is such a big home team advantage: with 50,000 fans screaming penalty it must be extremely difficult for refs not to be influenced by the crowd.  In fact you can almost put a figure on the influence of the crowd by comparing home and away goal scoring averages: if you examine premiership games, the home side gains an average net advantage of 0.35 of a goal per game, which can only really be down to the net contribution of the crowd/ref decision effects.

It's no wonder, as a result, that there is such a disproportionate home nation advantage.  Effectively every home nation team is starting with a 0.35 goal lead, and this advantage, aggregated up over the course of a tournament, means that nearly 30% of all world cups have been won by the home nation; that is roughly 10 times higher than chance.

Am I likely to ever see England win another world cup in my lifetime?

It's probably a question most England fans ask themselves. What does it take to win a world cup, and how good do you have to be to override luck?  We have taken a look at this and run some calculations.

The chart below takes a little explaining, but it maps a team's skill level against the number of times it is likely to win a world cup over the course of an average football supporter's lifetime of 72 years, i.e. 18 world cups.  If there are 32 teams in a world cup, you are an average team and your team qualifies for every world cup finals, the chances are you will win 1.1 world cups over your lifetime. If you are England and only qualify roughly 80% of the time, the chances drop to 0.96.  If your team is twice as good as average you are likely to win roughly 2 world cups, and if it is 4 times better, 6 world cups.
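For the curious, here is a rough Python sketch of the kind of arithmetic behind that chart. It is only an illustration: the per-tournament win probability is modelled as the team's skill multiple divided by an "effective field" of realistic contenders, and the effective field size of 16 is my own guess at a calibration that roughly reproduces the 1.1 figure for an average team; it is not taken from the original analysis.

# Rough sketch of the lifetime World Cup wins calculation described above.
# Assumptions (mine, not the author's exact model): a team's chance of winning
# any single tournament is its skill multiple divided by an "effective field"
# of realistic contenders; effective_field=16 is a guess chosen to roughly
# reproduce the 1.1-wins figure quoted for an average, always-qualifying team.

def expected_lifetime_wins(skill_multiple, qualification_rate,
                           tournaments=18, effective_field=16):
    p_win_per_tournament = min(1.0, skill_multiple / effective_field)
    return tournaments * qualification_rate * p_win_per_tournament

for label, skill, qual in [("average team, always qualifies", 1.0, 1.0),
                           ("England-ish, qualifies 80% of the time", 1.0, 0.8),
                           ("twice as good as average", 2.0, 1.0),
                           ("four times as good as average", 4.0, 1.0)]:
    print(f"{label}: {expected_lifetime_wins(skill, qual):.2f} expected wins")

Note that the four-times-better case comes out lower here than the 6 quoted above, so the original model is presumably steeper than this simple linear one; the sketch is only meant to show the shape of the calculation.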


England have won one world cup, Germany three and Brazil five, so does that mean we are an average team and Germany are three times better than us and Brazil four times better?

Well, essentially yes. If you look at the average game-win ratios of all the teams that have played most regularly in World Cups against the number of World Cups they have won, it's pretty closely correlated at 0.91.    Germany has a three times higher win ratio than us and Brazil four times higher.


Now I appreciate there is some self-selection involved here; this chart should really be based on first round matches only for a totally fair comparison, but we don't have that data. I think it's reasonable to say, though, that England has not really been done out of its fair share of World Cups.  I think we have won as many as our team's aggregated performance deserves.  You might argue that some teams have been luckier than most (Italy certainly) and others unlucky; Mexico should have won it twice by now based on their aggregated performance.

Roughly speaking, that means there is only about a 40% chance I will witness England lift a world cup in my remaining lifetime, but almost certainly I will have to endure another series of victories for Germany and Brazil.  Oh well, better come to terms with it, but I live in hope.

But let's fantasise for a minute: how many World Cups could we have won?

Imagine we lived in an infinite number of universes where for the last 70 years we had been playing an infinite series of world cups with a team with the same skill level.


Well, on average, 32% of the time we would not have won a single world cup by now, in 21% of cases we might have picked up two, and in 7% of cases the same number as Germany.  There is one universe in 5,000 where Germany would not have won a single world cup and England would have won 4!  Anyone fancy moving there?
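If you fancy running the parallel-universes experiment yourself, here is a small Monte Carlo sketch. The per-tournament win probabilities are back-solved from the figures quoted earlier (an average qualifier expects about 1.1 wins in 18 tournaments, Germany roughly three times that per tournament), which is my calibration rather than the author's published model, and treating the two teams as independent is a simplification since only one team can win any given tournament.

import random

# Monte Carlo sketch of the "infinite universes" thought experiment above.
# Calibration is back-solved from the article's own figures and is therefore
# an assumption: England ~1.1 expected wins over 18 tournaments, Germany ~3x
# as likely to win any single tournament. Independence between the two teams
# is also a simplification.

TOURNAMENTS = 18
P_ENGLAND = 1.1 / TOURNAMENTS
P_GERMANY = 3 * P_ENGLAND
UNIVERSES = 200_000

def lifetime_wins(p, rng):
    # Number of tournaments won in one simulated supporter's lifetime.
    return sum(rng.random() < p for _ in range(TOURNAMENTS))

rng = random.Random(2014)
england_counts = [0] * (TOURNAMENTS + 1)
germany_blank_england_four = 0

for _ in range(UNIVERSES):
    eng = lifetime_wins(P_ENGLAND, rng)
    ger = lifetime_wins(P_GERMANY, rng)
    england_counts[eng] += 1
    if ger == 0 and eng >= 4:
        germany_blank_england_four += 1

print(f"England wins none:  {england_counts[0] / UNIVERSES:.0%}")
print(f"England wins two:   {england_counts[2] / UNIVERSES:.0%}")
print(f"England wins three: {england_counts[3] / UNIVERSES:.0%}")
print("Germany none, England 4+: about 1 in",
      round(UNIVERSES / max(1, germany_blank_england_four)))

Run it and the no-wins, two-wins and three-wins percentages come out very close to the figures above; the joint Germany-blank scenario lands in the same rare ballpark, give or take the independence simplification.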



Saturday, 21 June 2014

Tips on writing a good conference presentation

Are you fretting about putting together a presentation for an upcoming event? Here are some tips based on the experience of sitting through a few, delivering a few and designing a few.  They are geared towards market research but I suppose the thinking applies to any presentation.

Design
  1. Images really, really do help make a good presentation - but read Presentation Zen to understand how to use them effectively
  2. Aim to present one thought per slide (you can break this rule if the slide is exceptionally well designed)
  3. Avoid bullet points at all possible costs - this is as much a philosophy as anything - "set your Bayesian priors to zero". OK, you might be persuaded to let one or two creep in, but don't start with a presentation that is 100% bullet points!
  4. Is your main message tweetable? Think about what people viewing your presentation will be doing - some may be tweeting the content, so help them by turning your headlines into tweetable messages
  5. Avoid video - don't fall into the trap of thinking adding a video will make your presentation more "dynamic"; it usually dehumanizes your presentation.
  6. Don't fret about look and feel too much - yes, do your best, but there are sure to be better designed presentations at the event you attend - the story and how you deliver the content is far more important - some of the best, most inspiring presentations I have ever listened to have looked awful from a design point of view (um... would it be undiplomatic to name Brainjuicer presentations as an example?) - focus on the story and you will be fine! 
Structure

Now this might sound like pompous advice, but to write a good persuasive presentation I really do suggest you first read up on the basic tenets of Greek rhetoric, in particular Aristotle's ideas of Ethos, Logos and Pathos, the 3 ways to persuade. 

You must start your presentation by establishing Ethos, which is about building a bond of trust with your audience, then use Logos, which is about making logical arguments, and then end with Pathos, which is all about drawing out the emotions of the audience.

Often I see Pathos being used wrongly at the start of a presentation, e.g. kicking it off with some sort of cocky joke or dramatic video, from which point everything else seems flat.  Drama and emotion must be saved until you have won the trust of your audience and won the argument; then you use them to drive home your message.

Ethos is really about establishing some humanity and connection with the people you are talking to on some level, and I have written about it in this separate blog post.

The logos part of the process is the most skilled and it is about identifying the key problems the audience might have on an issue and then outlining your solutions.  Never present a problem to an audience if you are not going to follow with a solution.

Tips for writing the story

  1. Your presentation must tell a simple story that you can recount in a basic elevator pitch
  2. To devise your story, really roughly sketch it out first, either in your mind or on a piece of paper 
  3. Try writing your presentation as a story in Excel - you will be amazed how effective this is at allowing you to coalesce your thoughts into a simple story - one line of the Excel sheet is one slide of your presentation. Then, once you have got the basics down, you can really easily hack it around.
  4. Go on a walk and tell the story to yourself or tell it to yourself as you are going to sleep or driving to work and see if it flows cleanly
  5. If you get stuck telling the story to yourself you have what is known as a story knot - step back from it and try and tell it in a different way 
  6. Test market the story by trying to summarise what you are going to talk about to a colleague 

Content market researchers should avoid

  1. Don't pad your presentation with background stats - I don't need to know all the details of the sampling techniques; we are all grown up market researchers, that's a given, so just jump to the headlines.
  2. Avoid the generic advice trap:  we need to be faster, more insightful and cheaper!
  3. Check if your content fails the mobile-phone-growth-statistics bleeding obvious test:  yes, we know more people now use mobile phones than own a toothbrush! Spouting any statistic we could all have a good stab at guessing, or that many in the audience have heard before, is wasting delegates' time.
  4. If you are going to play to the crowd by highlighting one of the many shortcomings of our industry's working practice, don't you dare propose a proprietary solution that you and only you can use. The audience will want to hit you! 
  5. Don't mention a problem without having a bloody good solution to unveil that we can all grasp hold of.
  6. Don't tell us you have a solution and then not show us the details or an example
  7. If you are delivering a pure sales message about your great new piece of technology - come clean about it up front
  8. I don't need to be told how great Daniel Kahneman's work is anymore, or to have System 1 and System 2 explained to me.
  9. Along the same lines, avoid the cliche buzzwords of the moment - in the noughties it was the word web2.0 which drove me mad, and this decade's most important word to avoid using in any form of presentation is big data; I will let you make your own mind up about all the rest
  10.  This is the year of the xxxx!!! mmmm... This is probably not the year of anything (and certainly not the year of the mobile!) Avoid such pronouncements unless you are the chairman of the conference, when that task becomes obligatory. 
  11. Recycling 100% of someone else's ideas already recounted in a New York Times best-selling business publication is cheating!  

Technique
  1. Try and make me laugh at least once (or at very least a smile)
  2. Use simple examples: tell us your theory and then show us a real example - that could be seen as the essence of the structure of a good market research presentation
  3. Be prepared to go off screen - never underestimate the value of a good prop! Most of the show-stealing presentations I have seen have used something other than just a presentation to get their message across. 
  4. Admit your shortcomings and failures in a sandwich between your successes.  You could call this an integrity sandwich - we are much more open to hearing about and believing in your achievements if you own up to your failures too. 
  5. Make it interactive - a quiz embedded into your presentation is the simplest way to do this, but it can be tedious being challenged to guess the answer if there is no reward for doing so. Come armed with prizes if you are going to do this!
  6. If you are going to get people to do things, make sure it's inspiring. It is embarrassing for everyone to stand up, so if you are going to ask your audience to do this it had better be fun or genuinely interesting
  7. Dress rehearse your presentation in the office first to your staff - they will benefit too by knowing what you are going to talk about 
  8. OK, I suppose you had better get your presentation spell checked too!  As a class one dyslexic, spelling is a challenge for me, and often the first feedback I get when I give a presentation is a polite aside about the spelling. To avoid this type of humiliation I do recommend getting your presentation spell checked by a third party.* 

*Admittedly this is not advice I always take myself, to the fury of my marketing department and sales team, as I would describe myself as a bit of a militant dyslexic and feel I have the human right to make spelling mistakes on my own content sometimes.

Friday, 20 June 2014

Baby steps into the wearable era of research: ESOMAR DD 2014 Roundup

Compared to other global research events the ESOMAR Digital Dimensions conference is by no means the biggest, and it faces competition, without doubt, from more 'ideas' driven events, but nevertheless it is by far and away still my favourite market research event on the global calendar. Now I have to say that because I was chairman this year, but I do feel that despite all the competition it has reliably proved to be one of the most fruitful sources of new thinking and new trends for the market research industry - I consistently learn so much more at this event compared to the others I attend, and this year it was particularly fruitful.

I think that part of its success is down to the consistently high standards ESOMAR sets on paper submission - only 1 in 5 papers gets selected - and it also demands a lot more robust thinking from its participants. What you get as a result from this conference is a really thoughtful mixture of new ideas, philosophy and argued-out science.

This year saw one of the strongest collections of papers ever assembled, so much so that the selection committee asked to extend the prizes beyond 1st place. There were 6 major themes that emerged and 1 paper that I think could go on to have a major impact well beyond the boundaries of market research, and I returned home with 23 new buzzwords and phrases to add to my growing collection (see other post).

The big themes

1. The Physiological data age: At this conference we witnessed some of the baby steps being taken into the world of wearable technology, and a prediction by Gawain Morrison from SENSUM, who were one of the stars of the event, that we are about to enter the physiological data age.  They showed us a galvanic skin response recording of a 7 hour train journey which revealed that the highest stress point on the journey was not caused by any delays or anxiety to reach the station but by the moment the on-board internet service went down!  IPSOS are one of many MR companies starting to experiment with Google Glass and showed us how they were using it to conduct some ethnographic research amongst new parents for Kimberly-Clark. We saw some wonderful footage of a father interacting with his newborn child in such a natural and intimate way that it does not take much of a leap of the imagination to realise wearable technology is going to be a big topic at future MR events.

2. The big privacy issues looming over these new techniques:   The rise of wearable devices raises a whole range of new issues surrounding data privacy that were widely discussed at this conference.  Alex Johnson highlighted the central emerging dilemma in his award winning work Exploring the Practical Use of Wearable Video Devices, which won best paper: when doing wearable research it is almost impossible to avoid gathering accidental data from people and companies who have not given their consent to take part in the research. It's critical for the research industry to take stock of this.

3. Developing the new skills needed to process massive quantities of data:  The second big focus of this conference, which Alex Johnson's paper also highlighted, was the enormity of the data evaluation tasks researchers face in the future, for example processing hundreds of hours of video and metadata generated from wearable devices.  Image processing software is a long way from being able to efficiently process high volumes of content right now. He had some good ideas for processing this type of data: he proposed a whole new methodological approach which centres around building taxonomies and shortcuts for what a computer should look for, and a more iterative analytical approach.  In one of the most impressive papers at the conference, TNS & Absolute Data provided an analytical guide to how they deconstructed 20 million hours of mobile phone data to build a detailed story about our mobile phone usage that could be utilised as a media planning platform for the phone – the research battleground of the future is surely going to be fought over who has the best data processing skills.

4. De-siloed research techniques: I wish I could think of a better simple phrase to describe this idea, as it was probably the strongest message coming out of the ESOMAR DD conference - the emergence of a next generation class of more de-siloed research methodologies that combine a much richer range of less conventional techniques and a more intelligent use of research participants. Hall & Partners described a new multi-channel research approach that involved a more longitudinal relationship with a carefully selected small sample of participants, where across 4 stages of activity they engaged them in a mix of mobile diary, forum discussion and conventional online research, challenging them not just to answer questions but to help solve real marketing problems. Millward Brown described a collaboration with Facebook where they mixed qual, mobile intercept research and task-based exercises to understand more about how mobiles are used as part of the shopping experience.  Mesh Planning described how they integrated live research data with fluid data analysis to help a media agency dynamically adjust their advertising activity. IPSOS showed us some amazing work for Kimberly-Clark that spanned the use of Facebook to do preliminary qual, social media analysis, traditional home-based ethnography, and a new technique of glassnography. What all these research companies demonstrated was that, decoupled from the constraints of convention, given a good open brief from a client and access not just to the research data that the research company can generate but also to the data the client holds themselves, some research companies are doing some amazing things!

5. Mining more insights from open ended feedback:  Text analytics in its infancy focused on basic understanding of sentiment, but 3 great papers at the event showed how much more sophisticated we are becoming at deciphering open ended feedback.  Examining search queries seems to be a big underutilised area for market researchers right now, and KOS Research and Clustaar elegantly outlined how you could gather really deep understanding of people's buying motivations by statistically analysing the search queries around a topic.  Annie Pettit from Peanut Labs, looking at the same issue from the other end of the telescope, showed how suggestions to improve brands and new product development opportunities could be extracted from social media chatter by the careful deconstruction of the language used to express these ideas.  And Alex Wheatley, in my team at GMI, who I am proud to say won a silver prize for his paper, highlighted just how powerful open ended feedback from traditional market research surveys can be when subjected to quant-scale statistical analysis, rivalling and often surpassing the quality of feedback from banks of closed questions.

6. Better understanding the role of mobile phones & tablets in our lives: We learnt a whole lot more about the role of mobile phones and tablets in our lives at the conference, some of it quite scary.  We had expansive looks at this topic from Google, Yahoo and Facebook.  AOL provided some useful "Shapley value" analysis to highlight the value of different devices for different tasks and activities; it demonstrated how the tablet is emerging as such an important "evening device", its role in the kitchen and bedroom, and how the combination of these devices opens up our access to brands.  We learnt how significant the smartphone is when we go retail shopping, for a combination of social and investigative research reasons. We learnt about the emergence of the "Google shop assistant" - many people preferring to use Google in shops to answer their shopping queries rather than ask the shop assistants - how we use the phone to seek shopping advice from our friends, and how many of us post our trophy purchases on social media.

The impact of technology on our memory

The paper that had the single biggest impact at the conference was some research by Nick Drew from Yahoo! and Olga Churkina from Fresh Intelligence Research showing how our use of smartphone devices is really impacting our short term memory – we are subcontracting so many reminder tasks to the technology we carry around with us that we are not using our memory so actively. This was demonstrated by a range of simple short term memory tests correlated with mobile phone usage, which found the heavier smartphone users performing less well. The smartphone is becoming part of our brain!  This obviously has much bigger implications outside the world of market research, and so I am sure we are going to hear a lot more about this topic in the future.

Scary thought, which made the great end session by Alex Drozdovsky from BBDO about going on a digital detox all the more salient.  I am going to be taking one soon!

23 Buzzwords coined and used at the 2014 ESOMAR Digital Dimensions Conference

This year's Digital Dimensions conference produced a particularly good harvest of buzzwords, some of which you may have heard before but some of which I can guarantee are 100% new!

The conference buzzword award has to go to John Humphrey from Kimberly-Clark & Joost Poolman Simons from IPSOS, who in one presentation delivered more new additions to the market research vocabulary pool than I think I have ever heard in one hit. 

1. Glassnography: The new term for ethnography using Google Glass. Coined in a presentation by John Humphrey (Kimberly-Clark) & Joost Poolman Simons (Ipsos)

2. Privacy:  The word probably mentioned more often at this conference than any other (source: various)

3. Fanqual:  Doing qualitative research amongst your Facebook fans by posting ideas and getting their feedback. Coined in a presentation by John Humphrey (Kimberly-Clark) & Joost Poolman Simons (Ipsos)

4. Spammyness index = How Impersonal x how intentionally manipulative a piece of marketing communication is. Coined by Jacob White VisualDNA

5. Cupcake research: A derogatory term for the type of research that looks great, but is too sweet to eat and has little or no healthy substance. Coined in a presentation by John Humphrey (Kimberly-Clark) & Joost Poolman Simons (Ipsos)

6. Phylogenetics: A new very nerdy means of analysing social networks devised and discussed in a paper by OMD (don’t ask me to explain it!)

7. Amazonification: The observation that some people are using Amazon as their first port of call for researching and purchasing so many different things (note eBay is used in a similar way by another sub-strand of the population, but ebayification doesn't sound quite right!). Coined in a presentation by John Humphrey (Kimberly-Clark) & Joost Poolman Simons (Ipsos)

8. Data exhaust: The data pollution that pours out of systems that we cannot effectively use (source: various)

9. Shapley value:  The game theory technique devised by Lloyd Shapley (who won a Nobel Prize) is emerging as a great way of segmenting activity usage in market research, which was ably demonstrated by the team from AOL

10. The Big 5: The five key personality traits for understanding human behaviour: openness, conscientiousness, extraversion, agreeableness, and neuroticism. These are becoming the cornerstone of many standard research measurement techniques, so they get a collective noun, the Big 5. Coined by Jacob White, VisualDNA

11. The physiological data era:  Wearable technology is going to be the dominant new source of market research data over the next decade, and a prediction that physiological data is going to become one of the key metrics of market research. Coined by Gawain Morrison from SENSUM

12. Why analytics: Analysing big data to gain deeper insights into customers' wants and needs (source: various)

13. The army of influencers:  An observation of what it feels like to be an expectant mum for the first time and the bewildering array of advice you are offered in the digital age. Coined in a presentation by John Humphrey (Kimberly-Clark) & Joost Poolman Simons (Ipsos)

14. Corporate purpose: In the social media age, companies are realising the collective power of consumers - businesses that simply pursue the need to make money and don't address the actual needs of their consumers in their decisions can easily come a cropper.  To address this there is a growing trend amongst businesses to try and define their "corporate purpose". Coined by Jacob White, VisualDNA

15. Share of experience:  Share of voice is a fairly meaningless idea in a world where we are bombarded by messages in multiple dimensions.   We should focus on share of experience, which is more about measuring what is cutting through.  Coined by Chris Wallbridge, Mesh Planning

16. Renegade professionals:  The future is in the hands of the "renegade professionals". So much innovation is happening outside conventional businesses, and it's the outsiders, the renegades, that are inventing the future - very much on their terms!

17. Microboredom:  Those moments waiting in the queue when we noodle on our mobile phones. Mentioned by Alex Drozdovsky BBDO Worldwide

18. Technotots & digikids:  Terms to describe the proliferation and impact of technology on young children Mentioned by Alex Drozdovsky BBDO Worldwide

19. The Google shop assistant: The phenomenon that many people out shopping would prefer to look up their query on their smartphone than ask the shop assistant. Google is becoming the shop assistant!

20. Selection over sample: New research techniques are going to be more reliant on having the right type of active participants than larger numbers of balanced sample.  Coined by Grant Bird Hall & Partners

21. The Zero moment of memory: We are using our smartphones more and more to replace our own memory. Coined by Nick Drew from Yahoo! and Olga Churkina from Fresh Intelligence Research

22. Digital Detox: The idea of decoupling ourselves from our digital devices to allow us to unwind. Coined by Alex Drozdovsky, BBDO Worldwide

23. Turning off is the new emotion: Another idea coined by Alex Drozdovsky, BBDO Worldwide - we are being so bombarded by data and information in the new digital age that turning off is the new emotion

Tuesday, 29 April 2014

Surveytainment

This is a link to a presentation I delivered at IIeX in Amsterdam on Surveytainment, an exploration of how we could make surveys addictive.

http://www.insightinnovation.org/2014/04/23/surveytainment-how-can-you-make-surveys-more-addictive/


Saturday, 26 April 2014

Bonsai Survey Design


The blog post below is the first in a series I am writing to accompany a paper I am about to publish on Bonsai Survey Design, which will be presented at the upcoming ESOMAR Asia Pacific event in May 2014.

There is a pressing need in our industry to shorten surveys to meet the needs of the mobile-enabled research consumer.  More and more people want to complete surveys on their mobile phones, yet the average online market research survey we currently produce is too long and unwieldy to actually complete on a mobile device.  So surveys have to change to accommodate this. They have to become shorter, more engaging and better designed, and this paper is a guide to how to navigate your way through that process.

We are also going to produce a small "Bonsai" book to accompany this paper which I am excited about.

We have recently analysed over 30,000 surveys that GMI respondents have been asked to complete over the last 18 months around the world, and the average length of a survey, and I stress the word average, is 20 minutes.   The attention span of someone using a mobile phone is around 5 minutes for any one activity.  So there is a big gap to close.  

I estimate that less than 1 in 5 surveys right now is adequately optimized for mobile phone completion.  So my big focus over the last few months has been to explore practical ways in which surveys can be shortened and to establish a framework that our clients can use to address this issue.

I hope in these future blog posts to highlight some of the most effective techniques that you can share with your own clients on how to make surveys shorter and more effective.


How many people need to answer that question?




When we set up a survey we tend to think about how many people should answer the survey as a whole.  I would challenge you to think a bit beyond this and consider how many people should answer each question in your survey, as there are potentially significant savings in survey length that can be achieved by optimizing the sample for each question.

A typical survey might be sent out to around 400 respondents. In many respects this is a bit of an arbitrary figure; the number of people who need to answer each question in any one survey to get a statistically significant answer might range from anywhere between 5 and several thousand.

The number of people who need to answer each question depends on the level of variance in the answers respondents give.  Say I am testing 2 ads to find out which is better and the first 5 people I interview all prefer ad A over ad B: there is roughly a 95% certainty that ad A is going to be preferred overall, so job done.  If on the other hand I ask respondents to evaluate a brand on 15 different brand metrics using a 5 point likert rating scale and the scores range from 3.0 to 3.5 (which is not an uncommon level of differentiation), to pull apart the differences between these 15 metrics completely you would need to interview around 5,000 people.
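A quick back-of-the-envelope check of the ad example, assuming a simple 50/50 null hypothesis (my arithmetic, not a figure from any particular study):

# If the two ads were really equally liked, the chance of the first 5
# respondents all preferring ad A would be 0.5 to the power 5.
p_fluke = 0.5 ** 5
print(f"Chance 5 out of 5 prefer ad A by pure luck: {p_fluke:.1%}")    # ~3%
print(f"Confidence that ad A really is preferred:   {1 - p_fluke:.1%}")  # ~97%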

In an average survey there is a range of questions between these two extremes, and so it would make sense to stop thinking about your sample requirements for the survey as a whole and start thinking about your sample requirements at a question level.

Now the problem is that it is difficult to know in advance exactly how much sample you will need for each question, because working it out accurately requires some data.  

The solution to this is a more iterative survey design approach, where you don't set your sample quotas until you have sampled enough people to estimate the sample size requirements.   This can easily be done by sending out your survey in 2 batches instead of one go. You send out the first batch to what I would normally recommend is 100 respondents and pause the survey; this will give you enough data to roughly assess the sample requirements for each question, and you can then set quotas on each question for the second batch of sample.

Now there are obviously a few things you need to consider, for example how you are going to subdivide the data. If you are going to want to analyse some sub-demographic trends in the data for any one question, e.g. compare men v women or look at an age split for each of these groups, you will need a minimum sample for each group, so you may need to double or even quadruple your basic sample requirements for some questions to account for this.

When you do this across a survey you get a chart like this example below:


In this example you can clearly see that there are some questions that require a lot more sample than others.

If you were, say, interviewing 400 respondents in total, then for some of these questions you will already have enough data from the first batch of responses, and some of the others need only be answered by 1 in n of the respondents.  What this means is that if you randomize at a respondent level who answers each question, the survey overall gets shorter for each respondent, as the sketch below illustrates.
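Here is a minimal sketch of that respondent-level randomisation. The question names and per-question sample requirements are invented for illustration (in practice they would come from the pilot batch); each question is served with probability required_n / total_n, so on average each question collects roughly the sample it needs and nobody sees the full questionnaire.

import random

# Minimal sketch of respondent-level question randomisation. The question
# names and sample requirements below are hypothetical illustrations.

TOTAL_RESPONDENTS = 400
required_sample = {
    "Q1 ad preference": 30,
    "Q2 purchase intent": 150,
    "Q3 brand metrics": 400,
    "Q4 open feedback": 100,
}

def questions_for(rng):
    # Serve each question with probability required_n / total_n, so on
    # average each question ends up with roughly the sample it needs.
    return [q for q, n in required_sample.items()
            if rng.random() < min(1.0, n / TOTAL_RESPONDENTS)]

rng = random.Random(1)
lengths = [len(questions_for(rng)) for _ in range(TOTAL_RESPONDENTS)]
print(f"Average questions served per respondent: "
      f"{sum(lengths) / len(lengths):.1f} of {len(required_sample)}")

With these illustrative numbers each respondent sees, on average, fewer than 2 of the 4 questions, which is exactly where the survey-shortening comes from.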

So how do you actually work out sample sizes for a question?  

There is a relatively basic formula that you can use to calculate the minimum sample size for a question:

Minimum sample size = [(Standard Deviation x Z) / Acceptable Error]^2

Z is the factor that determines the level of statistical confidence you wish to apply. For 90% I would recommend Z = 1.64, and for 95% Z= 1.96.

You can see from this formula that it is all related to the standard deviation and the level of variance in the answers, which is how you set the acceptable error. In the brand example I quoted above, if the overall spread in answers is 0.5 and there are 15 metrics to differentiate, the "acceptable error" would be around 0.03 (0.5/15).
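Here is the same formula as a few lines of Python. The standard deviation of 1.1 in the example call is my own ballpark figure for a typical 5-point likert item, not a number from the survey data; in practice you would estimate it from the first batch of responses.

# Direct implementation of the minimum sample size formula above.

def minimum_sample_size(std_dev, acceptable_error, confidence=0.95):
    z = {0.90: 1.64, 0.95: 1.96}[confidence]
    return round(((std_dev * z) / acceptable_error) ** 2)

# The brand metrics example: 15 metrics spread across a 0.5 range on a
# 5-point scale, so acceptable error is roughly 0.5 / 15. The standard
# deviation of 1.1 is an assumed typical value for a 5-point likert item.
print(minimum_sample_size(std_dev=1.1, acceptable_error=0.5 / 15))  # roughly 4,000+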








Sunday, 17 November 2013

The Power of imagery in surveys


Over the last few years we have conducted a number of experiments exploring the role and use of imagery in online surveys, mostly as part of wider research looking at how to effectively engage respondents in research. A lot of it is hidden away across several papers and reports, so I thought I would try and consolidate this learning into one blog post. This is my guide to the use of imagery in online surveys.

Tuesday, 18 June 2013

Assessing the nutritional value of your survey?



Is the feedback from your survey the equivalent of a double quarter pounder burger with extra cheese and a side of fries?

Monday, 13 May 2013

A Monte Carlo approach to asking questions


In the early days of the internet, when designing websites, you would often have a discussion with clients about routing to different pages, setting out which link should take you to which page and after that to which page. Navigating a website in the early days was like going through narrow tunnels, and then you had to back out of them to get anywhere else. Then some bright spark realised you could have more than one linkage point to each page on more than one page, so you could navigate from one part to another more easily.

I make this point because I think we have a similar degree of tunnel thinking when we write surveys, in that we only ever think of asking a question in one way. What I would encourage you to think about is the opportunity of asking questions in more than one way.

How often do you struggle to pin down the exact wording of a question in a survey and find yourself in two minds about how to word it? Rating something is a classic quandary. Do you ask them how much they like it; how appealing it is; how keen they are to buy it; how much better or worse it is than other things, etc.? Asking people to give open ended feedback is another area where a possibly infinite number of ways to word a question exists, and I have had a career-long obsession about the best way to word this type of question. For instance, if you want to ask for feedback about a product you might word it "please tell us what you like or dislike about this product" or "what do you think about this product? what do you like or dislike about it" or "if you were criticising this product what would you have to say" or "what is the best thing about this product and the worst thing". Everyone answering these questions will respond in a slightly different way. Some will deliver better answers than others, some will work more effectively with some groups of people than other groups. Some may not deliver the same volume of feedback but more thoughtful responses. Some may trigger more thought than others.

OK, so the survey has to go live today and you don't have time to test and you are not sure which wording will generate the most feedback; what do you do?

The approach most people take is to pick the one wording you think is best, or the one a small committee of you thinks is best. But have you ever thought about just randomly asking this question in every single conceivable way across respondents and then mashing up all the answers?

Now, I have been playing around with doing this of late. It's not difficult to do from a technical point of view and I am really loving the data I get back (sorry not sure if you are supposed to love data or if that phrase is appropriate).

What I am finding is that in closed rating questions, asking a question in a random basket of ways appears to deliver* more stable answers that iron out the differences caused by question interpretation effects, and for open ended questions it appears to deliver* a greater range of more nuanced feedback than asking a question one way.

I would describe this as a Monte Carlo approach, because that is essentially what it is: what I am doing is netting out a mass of random variations in the way each question is asked. I have no way of knowing which wording is the most accurate, but netting out their combined results is more reliable than relying on any one single wording.
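For anyone who wants to try it, here is a bare-bones sketch of the mechanics. The wordings and the simulated ratings are placeholders of my own; in a real survey the rating would of course come from the respondent rather than a random number.

import random
import statistics

# Bare-bones sketch of the "Monte Carlo" wording approach: each respondent is
# randomly served one of several wordings of the same rating question and the
# answers are pooled. The wordings and randint() ratings are placeholders
# standing in for real respondent answers.

WORDINGS = [
    "How much do you like this product?",
    "How appealing is this product to you?",
    "How keen are you to buy this product?",
    "How does this product compare with others you use?",
]

def ask(rng):
    wording = rng.choice(WORDINGS)   # random wording per respondent
    rating = rng.randint(1, 5)       # stand-in for the real 1-5 answer
    return wording, rating

rng = random.Random(7)
answers = [ask(rng) for _ in range(400)]

print(f"Pooled mean across all wordings: "
      f"{statistics.mean(r for _, r in answers):.2f}")
for wording in WORDINGS:
    scores = [r for w, r in answers if w == wording]
    print(f"  n={len(scores):3d}  mean={statistics.mean(scores):.2f}  {wording}")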

What do you think? I appreciate I probably need to back this up with some solid research evidence as there are lots of issues here and so I am planning to conduct some larger scale experiments to test this theory more thoroughly. But before I dive in, I am open to some critical feedback.

Saturday, 16 March 2013

5 nice questions to ask about your own survey

 1. Would you do it yourself?

 This has to be the key question you should ask yourself.  If you were sent your survey by someone else, would you complete it? Would you give your full attention to every question?    If the answer is no, then don't expect the average respondent to answer your survey properly either.  

2. Does your survey pass the presentation test?

This is a good way of looking at things.  If your survey were a presentation that you were delivering to a room full of 50 people, how much more effort would you put into the design of it?  I bet you would probably want to add a few more visuals for a start, to liven it up. Where would you add these images?  Would you change the flow of it to ensure it made sense? Would you trim back the text?  Would your presentation be crammed with pages of dense bullet points?  Now imagine you were presenting this to say 500 people or even 1,000; presumably you would put even more effort into the design of the presentation?  Well, these are the sorts of numbers of people who might well be consuming your survey, so why not put the same effort into its design as you would a PowerPoint presentation.

3. Have you written the press release yet?

 One of the best ways of understanding what you really want to get out of the data generated from your survey is to write the press release summarizing its fantasy findings after you have drafted the survey. It's amazing, when you start doing this, what you focus on and what you leave out.  All of a sudden half the questions in your survey might start to seem irrelevant. It's a brilliant way of refining and editing back your survey.  This tip was given to me by one of my old bosses, Ivor Blight, whilst working at Mirror Group newspapers, and it's been one of the most valuable pieces of survey design advice I have ever received.

4. Why are you asking that question? 

Is it because it will produce a nice looking answer or because it is actually generating useful actionable feedback?

Take a customer feedback study where you ask your customers to rate your product or service.  You find out after polling 500 people that they score it 4 out of 5.  Now tell me, apart from feeling pleased, what are you going to do with this information to improve your product or service?   What if instead you asked those 500 people to name ONE thing that might make your product or service better; how much more useful would that information be?

We also have a habit of making huge assumptions about what a question will actually measure.  A classic example would be the purchase intent question, "would you buy this new product?"  This, as I hope most of you reading this will be aware, is proven to have little or no value as a predictor of sales.  A far more predictive question would be to ask them if they think they would buy the product instead of the main brand they buy.

I would challenge you in particular to consider the value of those banks of questions that so often get asked in surveys to measure brand characteristics, like: how much do you agree or disagree with these statements about this product... "it's a modern brand", "it's a trustworthy brand", etc. What are these types of question actually telling you? Are you trying to find out the driving reasons why people buy a particular brand? Well, why in that case don't you simply ask people that question: "why do you buy this brand?"  We did exactly this recently in an experiment to find out the driving factors behind why people purchase different brands of shampoo. There were over 50 clear reasons cited for choosing a particular brand, ranging from the smell through to the size of the bottle, the impact of advertising, the type of ingredients, the appeal of the packaging and how well each shampoo cleaned different types of hair. Some people don't believe there is any difference between one shampoo and another, so they buy the cheapest; some people buy a shampoo because it was recommended by their hairdresser; others chose a brand because they liked the creamy feel of the shampoo or the way it lathered up, or because they thought it was more ethical. But only 3 people out of a sample of 500 said they chose their brand because they felt it was modern; that is 0.6%.

5. Have you tested the survey?

And I don't mean for routing errors and spelling mistakes. I mean: have 30 people done your survey, and have you had a really good look at the data to see what it is delivering, how useful it is, what it's missing and what could be improved?  So few people in my experience properly pilot their research studies or use piloting as a means to develop and improve their survey, and yet this in my opinion is the single most effective way of improving your market research.  

Thursday, 14 February 2013

Where can we inject more creativity into survey design



Here are my thoughts on some examples of the areas where I feel we need to inject a bit more creativity into the design of surveys. This content is taken from one of my presentations on the topic.

Monday, 4 February 2013

A guide to writing open ended feedback questions





There are various goals to an open ended question but in most cases it is not about the volume of feedback but about the quality. Whether you are trying to get respondents to be analytical, creative or spontaneous, the biggest challenge you face is encouraging people to think and think in the right sort of ways.

The average respondent spends 15 seconds answering the average open ended question, and you get an average of 5 words. Five words might be enough if they constitute meaningful feedback, but often it is humdrum verbiage, a bit of a nightmare to analyse. If you were to ask people to watch an ad and write down what they thought of it, the most common response would be "it was OK". The second and third most common remarks would be "I liked it" and "I didn't like it", and the 4th would be "I don't know". These responses are clearly not of a great deal of value, so you might as well have asked a yes/no question: "did you like it or not".

This is a guide to show you how to write more effective open ended questions to improve the quality of feedback and to encourage respondents to:

- Put more thought into their answers
- Be more creative
- Be more analytical
- Be more reductive
- Be more free form in their thinking

Wednesday, 19 December 2012

20 new buzz words for 2013

I have had an amazing opportunity to attend market research conferences around the world over the last year and so have been exposed to all sorts of fantastic ideas and innovations and some brilliant thinking.  Stuck on a plane for 5 hours last week with not much to do, I thought I would try and condense the best of the best of this thinking and use it to try and identify some research trends of the future.  Here is what I have come up with: 20 new buzz words for 2013...

Qualitic analysis: You heard it here first. Qualitic analysis is where you use qualitative research methods to help analyse and process large scale volumes of open ended feedback.  Text analytics software is powerful but largely stupid, and so letting humans train these systems through qualitative analysis is the way forward.  With the merging of social media and traditional research I think this sort of hybrid approach is going to have a big future.

Lifestyle mapping: with geographical tracking now readily available via mobile phones, I believe we are going to see layered on top of maps a whole lot more information about people's lifestyles and activities, and when we start wearing glasses (https://plus.google.com/+projectglass/posts) which act as mini computer screens, which I believe is pretty inevitable, all this information is going to get a whole lot more valuable.

Social influence tracking: Mark Earls has been evangelizing how many of our decisions are influenced by the crowd, but who is taking account of this in their everyday brand tracking research? I think tracking of social influence is going to become an important benchmark measure for certain categories of products in the future.

Tribal research: Linked to this is looking at people as tribes and the people who buy certain types of brands as tribes of consumers. There is some very interesting pioneering work being conducted by the University of Bath School of management that is worth checking out. This book is worth a read on this topic.

Bonsai research: there is a global trend towards working out how to make shorter, more efficient surveys, and with mobile phones set to become a primary channel through which research is conducted, the pressure to do this will only increase. Bonsai research is the art of designing small but perfectly formed surveys

Habit specialist: understanding how we get into habits, how we break them and how we set up new ones is a really valuable piece of knowledge for market researchers trying to understand how to influence behaviour. Everyone should read this book; I can see it being a hot topic in the next year.  The Power of Habit 

Research data banks: We are generating shed loads of personal data that is clearly potentially very valuable. Right now Google and Facebook and Twitter etc think it's theirs; it's not, it's ours!  I envisage a consumer revolution over the ownership of all this information in the future.  The furore caused by Instagram trying to claim ownership of the pictures we upload is a case in point to illustrate the power of mass consumer revolt, causing them to instantly back track on their plans.   Someone someday is going to create a bank for it all and turn it into a personal asset that we and only we control and have the marketing rights to. You can have my purchase history if you like; I am selling it to you for £10!

Implicit research:  The multi award-winning work by Cog Research, showing how the speed of association between words and brands can help understand the underlying personality of brands, is I think going to turn this into a must-have technique for research tracking studies in the future.

Things research: all sorts of things now have technology embedded into them, from cars to fridges to billboards. I see new research companies emerging that specialise in turning these into data gathering research tools.

Now research: OK, we currently have some pretty quick turnaround research products out there that can give you responses in hours. The demand for this is growing and could fuel new breeds of research companies that offer instant real time research.

Social research networks: I see the potential for a next generation group of micro social pollsters and aggregators who keep abreast of what their friends think and report back.  A step up from the MROC.

Smart intercept research: as the quality of research that can be conducted on tablets improves (watch this space next year), I believe we are going to find these anchored all over the place to gather consumer opinion related to the experience people are having, be it in the queue at the bank, in the changing rooms of shops, handed around on trains and planes, in hotel receptions, or at the exit of every McDonalds. The future will be made up of an increasing amount of intercept research, with a menu of research studies that people have the option to do based upon their experiences.

Segmenting by decision making processes: Watch this presentation delivered at the recent New MR festival by Elina Halonen; I think she is onto something! Understanding how we "like to think" and how it affects our decisions is the hot new area of behavioral economics for next year.

Organic respondent generated research : Instead of writing a survey you simply pose a question and let respondents work together to answer it.

Consumer media: newspapers and radio already rely heavily on consumers to shape content. The next step is handing it all over to them completely: a newspaper entirely written and produced by the crowd, a radio station whose entire content is crowd driven.  Researchers could have a critical role in helping to facilitate this.

Consumer products: likewise we have seen the emergence of co-created products in recent years, but again the role of the consumer has in the main been that of a bit part, the last decisions being left in the hands of the marketeer. I see consumers taking over and completely running things in the future. Why not have a consumer-run bank where they vote on the fees it charges and the profit it makes, co-create its advertising and promote the bank themselves? Instead of passive shareholders we have active ones whose contribution is rewarded by partial/micro ownership. Why not set up 2 banks like this at the same time and let them compete with each other?  I am not sure if this is research or marketing, but surely it's an opportunity.

Prediction research: has not the US election proved to us the power of predictive markets once and for all? Why are we not all doing more of this?  I think we all will be in the future.

Social brains: looking at the collective thinking of social networks and treating them like a giant brain. Now I am not talking about large scale semantic analysis, more mass ethnography, mapping the patterns of thinking expressed in social media.

Survey games: Obviously this is a specific area of interest for me: surveys that cross the divide between entertainment and market research, and that can sit and be positively received within the social media space - watch this space

(ok this is only 17 as someone has pointed out!  - any suggestions to fill the last 3 spaces?)

Tuesday, 6 November 2012

The 4 killer stats from the ESOMAR 3D conference

I was only able to attend one day of this conference, which for me is without doubt the most useful research conference of the year, so I am sorry I can only give you half the story. But here is what I brought back with me: 4 interesting stats, 3 new buzzwords and 1 stray fact about weather forecasting.

350 out of 36,000: This is how many useful comments Porsche managed to pick out from analysing 36,000 social media comments about their cars. So the cost benefit analysis of this runs a bit short, and this was probably the headline news for me from the ESOMAR 3D conference: no existing piece of text analytics technology seems to be capable of intelligently processing this feedback. Every single one of these comments had to be read and coded manually. I was shocked; I thought we were swimming in text analytics technology, but apparently most of the existing tools fall short of the real needs of market researchers right now (I spot one big fat opportunity!).

240 hours: This was the amount of time spent, again on manual free text analysis, by IPSOS OTX to process data from 1,000 Facebook users for one project (and from this they felt they had really only scratched the surface). As Michael Rodenburgh from IPSOS OTX put it, "holy crap, they know everything about us".  There are, he estimated, 50 million pieces of data associated with these 1,000 users that it is possible to access if the end user gives you a one click permission in a survey. He outlined the nightmare it was to deal with the data that is generated from Facebook: just deciphering it is a task in itself, and none of the existing data analytics tools we have right now, like SPSS, are capable of even reading it. There were lots of excellent insights in this presentation, which I think deservedly won best paper. 

0.18: This is the correlation between aided awareness of a brand and purchase activity measured in some research conducted by Jannie Hofmyer and Alice Louw from TNS, i.e. there is effectively none. So the question is: why do we bother asking this question in a survey? Far better just to ask top of mind brand awareness, which apparently correlates at a much more respectable 0.56. We are stuffing our surveys full of questions like these that don't correlate with any measurable behaviour. This was the key message from a very insightful presentation. They were able to demonstrate this by comparing survey responses to real shopping activity by the same individuals. We are also not taking enough care to ask a tailor made set of questions of each respondent, one that gleans the most relevant information from each of them: a buyer and a non-buyer of a product in effect need to do 2 completely different surveys. Jannie senses that the long, dull online surveys we create now are akin to fax machines and will be obsolete in a few years' time. Micro surveys are the future, especially when you think about the transition to mobile research. So we need to get the scalpel out now and start working out how to optimise every question for every respondent.

50%: The average variation between the claimed online readership of various Dutch newspapers as published by their industry JIC and the readership levels measured from behavioural tracking of PC and mobile activity conducted by Peit Hein van Dam from Wakoopa. There was such a big difference that he went to great lengths to clean and weight the behavioural measurement to account for the demographic skew of his panel, but found this did not bring the data any closer to the industry data, in fact further away. Having worked in media research for several years I am well aware of the politics of industry readership measurement processes, so I am not surprised how "out" this data was, and I know which set of figures I would use. He pointed out that cookie based tracking techniques in particular are really falling short of delivering any kind of sensible media measurement of web traffic. He cited the "unique visitors" statistic published for one Dutch newspaper website and pointed out that it was larger than the entire population of the Netherlands.

Note: Forgive me if I got any of these figures wrong - many of them were mentioned in passing and so I did not write all of them down at the time - so I am open to any corrections and clarifications if I have made some mistakes.

3 New buzzwords

Smart Ads: the next generation of online advertising, with literally thousands of variant components that are adapted to the individual end user.

Biotic Design: A technique pioneered by Yahoo that uses computer modelling to predict the stand out and noticeability of content on a web page. It is used to test out advertising and page design, and we were shown how close to real eye tracking results this method could be. We were not told the magic behind the black box technique, but it looked good to me!

Tweetvertising: Using tweets to promote things (sister of textvertising)

One stray fact about weather forecasting

Predicting the weather: We were told by one of the presenters that although we have super computers and all the advances delivered by the sophisticated algorithms of the Monte Carlo method, if you want to predict what the weather is going to be like tomorrow, the most statistically reliable method is still to look at what the weather is like today, compare it to how it was yesterday and then draw a straight line extrapolation! I also heard that 10 human beings asked to guess what the weather will be like, operating as a wisdom of the crowds team, could consistently outperform a super computer's weather prediction when programmed with the 8 previous days of weather activity. Both of these "facts" may well be popular urban myths, so I do apologise if I am passing on tittle tattle, but do feel free to socially extend them out to everyone you know to ensure they become properly enshrined in our collective consciousness as facts!

Sunday, 4 November 2012

Big data and the home chemistry set


Are we all dodos? I heard a couple of people tell us at the ESOMAR 3D conference that we are perilously close to extinction, that we market researchers are dodos. In fact this has been a bit of a common theme at many of the conferences I have attended in the last few years: a prediction of the terminal decline of research as we know it. The message is that our industry is going to be hit by a bus, with the growth of social media and the big boys like Google, Facebook and IBM muscling in to our space. We are also, in many parts of the world, facing tough economic times and tightening budgets.

Yet despite all this, it appeared that this was the best attended 3D conference ever, and it's not just this isolated conference either. I have been going to research conferences all around the world over the last year and they all seem to be seeing growing numbers of attendees. All I can sense from these conferences, and particularly at this event, is an industry brimming with confidence and ideas.

So are we all putting on a brave face? Are we naively sleep walking into the future?   I don't think so...

Thursday, 18 October 2012

The Future of Market Research

What do you get if you Google the future of market research? Well not a link to this blog post, as Dan Kvistbo @kvistbo noticed.   I am glad someone actually checked.

This post is part of an experiment to see how a single post gets tracked on google search and how easy it would be to find if you searched for it.

I will actually be writing an article about the future of market research shortly, as part of a conference being organised by http://www.warc.com/ on that very subject.

Monday, 17 September 2012

ESOMAR congress: the buzz

This is a summary of some of the buzz I picked up at the ESOMAR congress.

There were 3 dominant phrases I heard over and over again at this year's ESOMAR congress: big data, social and story telling...

Big data: I estimate that nearly 50% of the presentations I sat through mentioned the term big data in one context or another, taking over from "mobile research", which has held the number one slot among market research buzzwords for the last 2 years. Despite this we did not exactly see many presentations demonstrating the execution of big data; mostly it came up as a warning to the industry that big data is about to engulf us all, change all the rules of engagement and encourage new competitors into our market research space.

Social: One of the most prominent nouns used by market researchers at the ESOMAR congress. The word social seems to have become detached from the word media and has taken on a life of its own. It has now been attached to the words research and survey, so we heard mention of a social survey - one that uses the language of the consumer.

Story telling: We were told over and over again that insights are not enough; as an industry we have to become better story tellers. We were also challenged to ask the right questions. We were told that agency planners are better story tellers and management consultants ask better questions, and if we could do both of these things better we could "wop both their arses".

Behavioral economics the star of the show

Behavioral economics was undoubtedly the star of the show though. Papers exploiting the idea picked up the best paper award (Tom Ewing @Brain Juicer), best case study (Florian Bauer) and, for my vote, best presentation (Kevin Kary @Affinova).

All 3 demonstrated the impact of thinking about the behavioral psychology of answering questions in surveys, how rational or irrational it can be, and how, if you account for this, you start to see a completely different picture in the data you are gathering.

Tom Ewing, in his most eloquent style, showed how turning off people's rational decision making processes allows you to measure the impact of their more emotional decision making processes. Florian Bauer's ground breaking pricing research demonstrated that unless you take into account the behavioral psychology of pricing when conducting price research, you will underestimate how much you can push up prices - which slightly concerns me: if the whole marketing industry cottons on to this it could trigger global hyper inflation. Affinova identified a big hole in existing concept development work: when we evaluate choices we forget to ask whether or not we would purchase any of the products at all. By plugging this gap through a change in the way they asked the question, they were able to far more accurately predict the success of new concepts.

Other observation...

Constant connectivity/Welcome to the new normal: There were many observations made at the congress about the changing relationship between brands and consumers. With an abundance of easily accessible data and consumers taking over the brand message through social feedback mechanics, we are moving from a push relationship with consumers, where we spend millions feeding them information through advertising and branding, to a pull relationship where consumers go out and get it on demand. This requires totally different thinking about how to position brands.

Customer-centricity: On the back of this, there was a lot of talk about placing the customer at the heart of decision making, and we commonly heard phrases like customer centricity, customer facing, empathetic relationships with customers and how we are engaging with customers, every day of the conference. Clearly the market research industry has identified that the customer is now king - not to say that it has not always been, but now it is nigh on a dictatorship!

Iteration/beta test norm: Consumers now expect products to evolve and expect this to happen rapidly. There was a lot of talk about this idea and how it is changing how we think about developing and researching products; in a sense these two things are becoming merged. Consumers buying, experiencing and reporting back their opinions on a product are now part of the product development cycle. The mantra seems to be: get your product out there and see if it flies, if not iterate.

Google: The no.1 brand on everyone’s lips this year was Google, perhaps because of their entry into the market research sector with a research offering, but also because they epitomise the big data players moving into the little data market place.

The rise and fall of Prezi: Over the last year we have seen the rapid rise and rapid fall of the use of Prezi. It dominated the last few conferences I went to, but there were only a couple of uses at Congress. Perhaps we all got fed up of feeling sea sick?

A few new buzzwords:

This conference was a bit light on new buzzwords, but here are a few I picked up on:

Flawsome: this was the best one, mentioned by Wendy Clark of Coca-Cola; flawsome means awesome with flaws. The idea is that great should not get in the way of good: consumers are getting used to beta testing products, so you should not let perfection hold you back; in fact a slightly flawed, quirky concept can give a brand more humanity.

Innernet: kids spending more time inside consuming the internet

Outernet: how the internet is now being used outside as part of our everyday lives 

Super abundancy: the prevalence of data and easy access to information in this digital world

Now: News is old hat! It frames the idea of information in the past tense. We don't want news any more, we want to know what is happening NOW!!! The time delay between events happening and us as consumers finding out about them has gone down to zero. With Twitter and live streaming, news is dead, long live NOW.

Invent Forward: A "reinvention" of the word reinvent

Phrases I heard used to describe our function as market researchers:

Insight intrapreneurs: mentioned 3 times

Agents of change: being the agent of change was a common call to action

Business story tellers: we don't deal in insights any more, we have to tell stories

Data synthesizers: the future of market research in the world of big data

Friday, 6 July 2012

A clutch of new Buzzwords


Here are some new buzz words and interesting phrases that I have collected recently that I think market researchers might be interested in.

Intrapreneurs: entrepreneurs who, instead of setting up and running their own business, work within larger businesses or organisations and drive entrepreneurial activity within them. Source: Maryan Broadbent, David Smith & Adam Riley ESOMAR Asia 2012

Linguistic anthropology: Social media data mining is leading to a new breed of research focusing on understanding the detailed use of language and the processes of human communication, variation in language across time and space, the social uses of language, and the relationship between language and culture.

SoLoMo: social-local-mobile. A word made for market researchers' lips that combines the 3 hot topics of social, local geographical targeting and mobile. Source: http://mashable.com/2012/01/12/solomo-hyperlocal-search/

Micro-multinationals: A new breed of entrepreneurs creating “micro-multinationals”, organizations that are global from day one. Source: Amit Gupta & Terry Sweeney ESOMAR Asia 2012

Social looping: Connecting and taking control of your disparate set of social network connections and connection channels. Source: marketing age http://sedatedworld.com/?p=947

Personal branding: The idea that people now are thinking about themselves as brands. Source: various  (Elina Halonen rightly pointed out that this is not exactly a new buzzword, but all I would say is that I have heard it being used quite a lot at the moment!)

Crowdfunding: The new trend for social crowd backed business ventures  e.g.  http://www.crowdfunder.co.uk/,  http://www.crowdcube.com/

Global villager: The globe has been connected into a village by digital technology - an idea originally presented by Marshall McLuhan, popularized in his book The Gutenberg Galaxy: The Making of Typographic Man (1962), but really only realised since the advent of the web. So if you are hooking up on twitter with people on another continent, you are one of the global villagers. Source: Maryan Broadbent, David Smith & Adam Riley at the ESOMAR Asia 2012

Research Improv: Using some of the theatrical techniques of improvisation in focus groups or workshops to develop and explore ideas. Source: Lee Ryan: http://appliedimprov.ning.com/profile/LeeRyan

Kinesthetic research: Kinesthetic learning is a style of teaching where pupils carry out a physical activity rather than listening to a lecture or watching a demonstration. Kinesthetic research is where we conduct research through a physical or immersive activity, and it is tipped to be a growing area of research innovation. Research improv is a branch of kinesthetic research; client participation in co-creation exercises with end users is another example, and so too, I suspect, is the next example, socialized research, which I spotted as a topic at the forthcoming ESOMAR congress.

Socialized Research: This is the title of what looks like it might be a hot ticket presentation at this year's ESOMAR congress by OTX Ipsos Open Thinking Exchange: "a brave new world of immersion, augmented reality, geo-location, co-creation…", the addition of a little "social" into everything we do so that consumers are engaged in ways that capitalize on and mimic their expectations given the realities of today's new world. Welcome to the new normal. Are you ready?

Decision making science: We started with psychology, which branched off into social psychology, then behavioural science, which got refined into behavioural economics; now we have a new one, decision making science. A nice all-explaining concept. Source: http://www.research-live.com/features/measuring-emotion/applying-the-science-of-decision-making-to-marketing/4007689.article

Creative leaders: people in organisations who act as grit to drive innovation. Source: Maryan Broadbent, David Smith & Adam Riley at the ESOMAR Asia Conference

Social graph: the global mapping of everybody and how they're related.  Source: Brad Fitzpatrick http://bradfitz.com/social-graph-problem/

Being the wide angle lens: The person in an organisation who offers a more panoramic viewpoint on a business. Source: Maryan Broadbent, David Smith & Adam Riley at the ESOMAR Asia Conference

Chief customer: Person or persons who represent the embodiment of a customer in a business.  Source: Maryan Broadbent, David Smith & Adam Riley ESOMAR Asia 2012

Freemium: This is a business model by which a product or service is provided free of charge, but a premium is charged for advanced features. Source: this term has been around long enough to grab itself a wikipedia entry  http://en.wikipedia.org/wiki/Freemium

Showroom retailing & Monitor Shopping: A shift that sees retail spaces like electronics and book shops become showrooms where people look at products and then order them online. Monitor shopping is the process of going shopping online. Source: various

Sharkonomics: Taking a shark-like approach to battling with your competitors, i.e. sneaking up behind them and taking a great big chunk out of their market share through some clever strategic move. This seems to be the way that some of the big boys in the mobile and internet businesses are operating now, e.g. Microsoft launching a premium tablet. Source: title of a book by Stefan Engeseth

Finally, a few twitter specific buzzwords:
Trashtag: A hashtag that someone tries to establish for purely self-centred and/or commercial reasons, rather than to create a strand of content that might actually be useful or interesting to someone else.
Twitchunt: torrents of me-too sentiment on twitter gathering mass and momentum very quickly.
Obsoltweet: a tweet that has missed the boat
source: http://www.abccopywriting.com/blog/2012/05/10/12-new-twitter-buzzwords/


Tuesday, 3 July 2012

How to calculate the length of a survey

As an industry we tend to use survey length as the cornerstone of how we price surveys, but the estimated and real lengths of surveys can often turn out to be wildly different, leading, as I have experienced, to potential conflict.

The reason is that we have not established in the research industry a common and reliable way of estimating the length of a survey. The most common method in circulation is to assume we answer surveys at 2.5 questions per minute, but this technique is fatally flawed, because questions themselves can vary wildly in length: e.g. a survey of 10 grid questions, each with say 50 options, may take 50 times longer to answer than a survey of 10 simple yes/no questions.

So I have been on a bit of a quest to work out some slightly more accurate ways of doing this. As a result of some recent work we have been doing to examine in detail how long respondents take to answer surveys, I have come up with 3 new alternative methods that I would like to put forward to more accurately calculate the length of a survey.

I hope they may be of use to some of you.

Method 1: Survey length = (W/5 + Q*5 + (D-Q)*2 + T*15)/60


This is the most accurate way of doing it (though I recognise it takes quite a bit of work). This formula will give you the length of an English language survey in minutes; a short worked sketch in code follows the definitions below.

W = word count: Do a word count of the total length of the questionnaire (questions, instructions and options). An easy way to do this is to cut and paste the survey into Word, which will tell you the word count, but don't forget to remove any coding instructions first. Respondents in western markets read English at an average rate of 5 words per second.

Q = Number of Questions: Count how many questions the average respondent has to answer. Allow 4 seconds per question of general thinking time and 1 second of navigation time* (assuming 1 question per page).
*this may vary depending on the survey platform; if it takes longer than 1 second to load each page, adjust accordingly

D = Total number of decisions respondents have to make: Count how many decisions in total the average respondent makes, using the guide below, and allow 2 seconds per decision (note the formula counts these as D - Q, on the basis that the first decision for each question is already covered by the thinking time above).

Single choice question = 1 decision
Multi-choice question = 0.5 of a decision per option
Grids = 1 decision per row

T = Open ended text questions: Count how many open ended text feedback questions a respondent has to answer and allow 15 seconds per question. (Note this may vary quite dramatically based on the content of the question, but on average people dedicate about 15 seconds to answering an open ended question.)
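
To make the arithmetic concrete, here is a minimal Python sketch of Method 1, written exactly as the formula above; the function name and the example counts are my own illustration, not figures from a real survey.

def survey_length_method1(w, q, d, t):
    # Estimated survey length in minutes: (W/5 + Q*5 + (D-Q)*2 + T*15)/60
    seconds = (
        w / 5            # reading time at ~5 words per second
        + q * 5          # ~4s thinking + ~1s page navigation per question
        + (d - q) * 2    # ~2s per decision beyond the one per question
        + t * 15         # ~15s per open ended answer
    )
    return seconds / 60

# Example: 1,200 words, 20 questions, 60 decisions, 2 open ended questions
print(round(survey_length_method1(1200, 20, 60, 2), 1))  # 7.5 minutes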

Method 2: Survey length = (W/5 + R*1.8)/60

If you want a slightly simpler approach, use this formula, which is not quite so reliable but will get you close (see the sketch after the definitions below)...

W = word count
R = total number of row options: Note this is just rows, not columns, on a grid. This can be done quite easily by cutting and pasting your survey into Excel, marking up all the rows in a side column and then sorting.
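
And the equivalent minimal sketch for Method 2, again with made-up example counts for illustration only:

def survey_length_method2(w, r):
    # Estimated survey length in minutes: (W/5 + R*1.8)/60
    return (w / 5 + r * 1.8) / 60

# Example: the same 1,200 word survey with 80 answer rows in total
print(round(survey_length_method2(1200, 80), 1))  # 6.4 minutes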


Method 3: W/150

If you don't have enough time to add up all the questions and row options, this is another quick and dirty method (though I would not vouch for it being much more accurate than the 2.5 questions per minute approach).

This will give you a rough estimate of the length of a survey in minutes. It is nowhere near as accurate as the 2 more detailed methods above, but it will be somewhere in the correct ball park. Be careful, though, if you spot you are dealing with a particularly verbose questionnaire.

A wisdom of the crowd approach I would recommend is to use both the 2.5 questions per minute and W/150 methods and compare the differences: if they produce roughly similar figures, go with that; if they generate big differences, it might be worth adopting method 1 and doing it properly.
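
As a rough illustration of that cross-check in code (the 25% gap threshold is just my own arbitrary cut-off, not a tested rule):

def quick_estimates(word_count, question_count):
    # Compare the 2.5 questions-per-minute rule of thumb with Method 3 (W/150)
    rule_of_thumb = question_count / 2.5   # minutes
    method3 = word_count / 150             # minutes
    return rule_of_thumb, method3

rule, m3 = quick_estimates(1200, 20)
print(rule, m3)  # 8.0 vs 8.0 - close enough to trust
if abs(rule - m3) > 0.25 * max(rule, m3):
    print("Big gap - worth doing Method 1 properly")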


Where will all these formulas fall over?


1. If all the respondents don't see all the questions: Skip logic can mean not everyone sees every question in a survey, which makes it hard to work out the average number of questions respondents will have to answer, and you need to know this to accurately work out the average survey completion time. Most errors in estimating survey length centre around this issue. There is often no easy way of doing this other than manually working it out using a spreadsheet.

2. Not properly taking into account question loops: This is another issue that leads to people miscalculating the length of a survey. If, for example, there is a loop of questions that you ask for a set of brands, people often forget to include the extra time it takes to answer these questions and only count one loop.
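
One way to handle both of these pitfalls is to work out the expected number of questions the average respondent actually answers, block by block, before feeding Q into Method 1. A minimal sketch, with entirely hypothetical routing proportions and loop counts:

# (questions in block, proportion of respondents routed to it, number of loops)
blocks = [
    (10, 1.00, 1),  # core questions everyone answers once
    (6,  0.40, 1),  # buyer-only section seen by ~40% of respondents
    (4,  1.00, 3),  # brand loop asked for 3 brands
]

# Expected questions answered by the average respondent - the Q to use in Method 1
expected_q = sum(n * seen * loops for n, seen, loops in blocks)
print(round(expected_q, 1))  # 10 + 2.4 + 12 = 24.4 questions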

3. If you are working out the length of a survey not conducted in English, or where English is not the primary language (India for example): you will need to weight for longer reading, comprehension, consideration and survey loading times in different countries. Below is a rough weighting guide if you are working from a translated version of an English survey (sorry that I don't have time weighting data from every country); a small sketch applying these weightings follows the table:

Language/region     Length weighting
Japanese/Korean     0.95
Netherlands         1.00
Germany             1.05
French              1.06
Spain               1.09
Scandinavia         1.10
Italy               1.11
Chinese             1.13
India               1.34
Eastern Europe      1.35
Russia              1.37
Latin America       1.43
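
Applied in code, the adjustment is just a multiplier on the English-language estimate; the 7.5 minute figure below is only an example taken from the Method 1 sketch above.

# Weightings from the table above, applied to an English-language estimate
LENGTH_WEIGHTING = {
    "Japanese/Korean": 0.95, "Netherlands": 1.00, "Germany": 1.05,
    "French": 1.06, "Spain": 1.09, "Scandinavia": 1.10, "Italy": 1.11,
    "Chinese": 1.13, "India": 1.34, "Eastern Europe": 1.35,
    "Russia": 1.37, "Latin America": 1.43,
}

english_estimate_minutes = 7.5  # e.g. from Method 1
print(round(english_estimate_minutes * LENGTH_WEIGHTING["Russia"], 1))  # 10.3 minutes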

4. If there are a lot of images in the survey: you will need to allow for extra loading time. Allow between 2 and 10 seconds per MB.

5. If you are including a lot of non-standard question formats in the survey: e.g. sorting and drop-down style questions take longer to answer.

6. If you are boring people to death with a long, highly repetitive survey! Respondents will start speeding up when they get bored, so average decision making time can drop.

Do you have any thoughts?


Now I would love to hear from anyone who has thoughts on this or has come up with what they think is a more effective means of doing it. My ultimate aim is to find an agreed method for the whole industry to adopt as a more effective trading currency when pricing surveys.