Friday, 2 November 2018

Is shy voting going to be an issue impacting the 2018 US midterm polls?

Voters being a little shy about declaring who they will vote for is one of the big challenges for polling companies in certain elections around the world, and it has led to some notorious polling errors in recent history.

The level of shy voting in any one election varies dramatically based on the character of the parties and candidates contesting the vote. It tends to be much more prevalent in elections where extreme left- or right-wing/nationalist parties or candidates are in contention. In Poland, for example, after the fall of communism people were shy to admit that they were still planning on voting communist, which led to some big polling errors under-representing the scale of the communist vote. Similarly in France, many people have historically been shy about declaring their votes for the Front National, which led to a major polling upset in the 2002 presidential election, where Jean-Marie Le Pen's vote share was under-represented by more than 6%, enough for him to move from 4th to 2nd place in the first-round voting, which put him through to the second-round run-off.

Accounting for shy voting is extremely difficult, as shy voters have a tendency to opt out of taking part in polls, which makes it difficult to measure directly. I have first-hand experience of this from polling we conducted during the 2016 US presidential election. During the height of the Trump "pussygate" scandal we didn't see any noticeable drop in support for Trump among the Republican voters who participated in our surveys; what we did observe, though, was a drop of around 3% in the number of declared Republicans participating in our survey compared to the previous waves. This all changed when the FBI announced they were investigating Clinton a week or two later: the number of Republicans participating in the next wave of our survey suddenly leaped by 6%. When we weighted our sample to balance it by party representation, as we did, this difference was invisible, and we only spotted it in hindsight analyzing our unweighted responses.
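To make this concrete, here is a minimal sketch in Python of how weighting to fixed party targets hides a shift in who takes part. The numbers are made-up illustrations, not our actual polling data:

    # Hypothetical illustration: weighting to fixed party targets hides
    # a shift in which party's supporters choose to take part.
    wave1 = {"Rep": 450, "Dem": 550}      # respondents by party, wave 1
    wave2 = {"Rep": 420, "Dem": 580}      # wave 2: fewer Republicans take part
    targets = {"Rep": 0.45, "Dem": 0.55}  # assumed population party split
    support = {"Rep": 0.90, "Dem": 0.05}  # share backing the Republican candidate

    for wave in (wave1, wave2):
        n = sum(wave.values())
        unweighted = sum(wave[p] / n * support[p] for p in wave)
        # Weighting re-scales each party to its fixed target share, so the
        # weighted figure is blind to how many of each party responded.
        weighted = sum(targets[p] * support[p] for p in wave)
        print(f"unweighted: {unweighted:.3f}  weighted: {weighted:.3f}")

The weighted topline comes out identical in both waves; the signal only shows up in the unweighted participation counts, which is exactly why we missed it at the time.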

Shy voting issues are a lot more prevalent in face-to-face and human-operated phone polls compared to the more anonymous format of the online survey. This is something that the Kantar Public team has studied in some detail. In the 2012 French election, for example, there was a 4% difference in the Front National vote share between phone/face-to-face and online polls.


This perhaps explains one of the reasons why the state-wide polls, which are nearly all phone based, had much larger errors in the 2016 US election compared to the national polls, which tended to be more online based. There were clearly other factors as well, but without doubt there were some people who were shy about declaring they would be voting for Trump. You can almost measure the scale of it by looking at the size of the systematic errors in the polls after weighting out some of the other issues that were a factor, such as education bias. By my rough calculations, shy-voting factors accounted for upwards of 2% under-representation of Trump's eventual vote share in some of the state-level polls.
   
This brings me to the mid-term state election polling in the US. Is shy voting likely to be an issue this round too? I am afraid to say it is looking like there could well be. There has been some learning since the big state-wide polling errors of 2016, but some elements of shy-voting error are almost impossible to eradicate from polls like these.

At this stage it's extremely difficult to measure shy voting directly, but one indicator is to study the level of undecided voters in each election district. One of the opt-outs for a shy voter talking to a phone pollster is simply to say you don't know who you are going to vote for.

I have done some analysis of the undecided polling data from 44 of the mid-term elections as published in the New York Times: https://www.nytimes.com/interactive/2018/upshot/elections-polls.html. What I have examined is the level of undecided voters when either a Republican or Democrat candidate currently holds a lead in the poll, compared by the relative size of that lead, and what I am observing are some differences.
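For anyone wanting to replicate this, the analysis boils down to something like the following Python/pandas sketch. The file name and column layout are my own hypothetical framing of the NYT data, not theirs:

    import pandas as pd

    # polls.csv is assumed to hold one row per district poll with columns:
    # district, rep_share, dem_share, undecided (all in percentage points)
    polls = pd.read_csv("polls.csv")

    polls["leader"] = (polls["rep_share"] > polls["dem_share"]).map(
        {True: "Republican", False: "Democrat"})
    polls["lead"] = (polls["rep_share"] - polls["dem_share"]).abs()

    # Bucket races by how close they are, then compare mean undecided
    # levels depending on which party currently leads.
    polls["closeness"] = pd.cut(polls["lead"], bins=[0, 2, 5, 10, 100],
                                labels=["<2pt", "2-5pt", "5-10pt", ">10pt"])
    print(polls.pivot_table(index="closeness", columns="leader",
                            values="undecided", aggfunc="mean"))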



Now it is very difficult to definitively explain these differences; they could simply be random error variance. But it's a basic piece of human psychology that voters tend to be more shy about declaring their support for a more controversial candidate when they feel they might be in a minority. When a candidate starts to get even a slight lead, people start to feel more comfortable declaring that they will vote for them too. There are also some types of wavering voters who don't want to be seen backing the losing candidate. So in close races you tend to get higher recorded levels of shy voting when a candidate subject to shy-voter factors is slightly behind than when they are in front. When one or the other candidate has a clear lead, shy voting tends to be less prevalent, as that means more of the wavering middle have clearly made up their minds.

What the results of this analysis show is that where the Democrats hold a lead there are slightly more undecided voters than where the Republicans hold a lead, and the closer the race, the bigger the difference; in the closest races the gap is around 1%. This suggests to me that there could well be an element of shy voting, caused by the negative image of Trump and some elements of the Republican party voiced in the media, that might impact the accuracy of some of these mid-term election polls. And with nearly 20 of the 44 races I examined being near dead heats, we might have to prepare ourselves for some polling miscalls.

Now I have to stress this is very much my personal conjecture about what this data is telling me; I am very much open to hearing other people's interpretations.

...and if this does happen, I would ask you not to leap to blame the polling companies, as it is almost impossible to methodologically factor in shy-voting issues like these in state-level polling ahead of time.




Thursday, 7 December 2017

England are going to win the World Cup in 2018: A Market Researcher's prediction!

This is a blog post I first published after the 2014 World Cup, which I am republishing to herald our expected victory next year!



In 2014 England got dumped out in the first round of the World Cup and everyone in our country was disappointed, an emotion we are quite used to feeling. After every World Cup failure begins a round of postmortems that we all probably secretly enjoy as much as the competition itself, working out who to blame for the team's failure. In past World Cups this has been quite easy: for example, David Beckham kicking a player and getting sent off, having a turnip head as a manager, or a lack of goal-line technology. But in 2014 it was a little difficult to work out who was to blame. I read a lot of overfit analysis, none of it particularly convincing because, well, in the scheme of things the sparky young team played quite well. It seems like we were just a bit unlucky last time round.

The role of randomness

It's quite hard to accept the role that randomness plays in the outcome of World Cup matches. Every nation, when they get kicked out or fail to even qualify, probably believes their team was "unlucky" and that their team is better than it actually is. So what is the relative importance of luck v skill when it comes to winning the World Cup?

Unlike the Premiership, where there are 38 games over which the performance of the teams is largely correlated with the quality of the squads (take a read of The Numbers Game* by Chris Anderson and David Sally), the performance of a World Cup squad cannot be calculated from the aggregated skill value of the squad; there is a lot more randomness involved. Imagine if the Premiership only lasted 3 games: in two out of the last four seasons, the team that won the Premiership might have been relegated.

*a must read if you are a market researcher and like football!

There is another factor too: in the Premiership the best players get sucked up into the best teams, hence the much higher win ratios between the top- and bottom-performing sides compared to the World Cup, where the best players are distributed more randomly, roughly in proportion to the size of each footballing nation. This in turn makes the outcome of international matches even more random.

Who influences the outcome of a match?

If you look at who has goal-scoring influence across a team, you will notice that the negative effects of errors that concede goals are pretty well distributed across a team, but the positive effects of scoring goals are a lot more clustered among certain individuals. See the chart below, showing statistics from an imaginary team based on typical performance data taken from the Premier League.
 

The potential performance of a World Cup team must be measured not by the overall skill value of the team but by the value of a smaller network of attacking players who can make the most game-changing contributions. In the case of players like Lionel Messi, a single player can carry the whole goal-scoring burden of a squad. It only takes one or two randomly allocated star players in a World Cup team to elevate its chances (think of Pele or Maradona).

The performance of the defence is more a case of luck. You might have one or two unreliable defenders who you may not want in your Premier League squad, because you know that over the course of a season they may cost you a match or two; but at the individual game level (and a World Cup is based on the outcome of three or four key individual games) the chances are a poor defender might well ride their luck. The other two important factors defenders have to contend with are the extra stress and the lack of team playing experience of a World Cup team compared to a Premiership squad. Without doubt stress plays a big part: players are really hyped up and there is probably an order-of-magnitude increase in tension, which is the root cause of many errors in World Cup matches. If you look at the defensive mistakes that cost us goals in recent World Cups, some of the biggest were made by what were effectively our most reliable players: John Terry, Steven Gerrard and Phil Neville. There is also a lack of formation practice to contend with, which is particularly critical for defence. How many hours of playing together does it take for a defence to gel? Most World Cup squads have days rather than months to prepare.

A team like England might well have a higher aggregated skill average compared to other teams, but this does not result in the same reliable performance ratios that you see in the Premiership. This is because over half the value is based on defensive skill, which can be completely undermined by bad luck, and we don't have a cluster of super-skilled players to elevate the team out of bad-luck matches by scoring more goals than we let in.

The influence of the Ref

To win World Cup matches you are much more reliant on the manager's structural approach, the contributions from clusters of individuals who might form good attacking combinations, and one other person – the REF! Or rather, the ref in conjunction with the crowd and the linesmen.

If you analyse a typical game, you will find that the number of major goal-scoring decisions in the hands of the referee and linesmen is actually enormous compared to any individual player. It's difficult to put a figure on it, but let's say on average there are about 6 decisions by the referee that could have affected a goal one way or another*; it's instantly obvious the relative influence they have on a match.

*That is a wisdom-of-the-crowd estimate, from asking a collection of football fans how many goal-affecting decisions are made in a match by the referee and linesmen; six was the median estimate.


Now, nine times out of ten these decisions balance themselves out, but refs are only human, so it's no wonder there is such a big home-team advantage: with 50,000 fans screaming penalty it must be extremely difficult for refs not to be influenced by the crowd. In fact you can almost put a figure on the influence of the crowd by comparing home and away goal-scoring averages: if you examine Premiership games, the home side gains an average net advantage of 0.35 of a goal per game, which can really only be down to the net contribution of the crowd/ref decision effects.

It's no wonder, as a result, that there is such a disproportionate home-nation advantage. Effectively every home-nation team is starting with a 0.35 goal lead, and this advantage, aggregated up over the course of a tournament, means that nearly 30% of all World Cups have been won by the home nation: 10 times higher than chance.
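As a rough sanity check on that claim, here is the back-of-envelope arithmetic in Python, using the figures quoted above (the 30% host win rate and a 32-team field are taken from the text, not recomputed):

    # Back-of-envelope check of the home-nation advantage claim.
    host_win_rate = 0.30   # share of World Cups won by the host nation (approx.)
    teams = 32             # size of a modern finals field
    chance = 1 / teams     # a random finalist's baseline chance of winning
    print(f"baseline chance: {chance:.1%}")               # ~3.1%
    print(f"host uplift: {host_win_rate / chance:.1f}x")  # ~9.6x, i.e. ~10x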

Am I likely to ever see England win another World Cup in my lifetime?

This is probably a question most England fans ask themselves. What does it take to win a World Cup? How good do you have to be to override luck? We have taken a look at this and run some calculations.

The chart below takes a little explaining, but it maps out a team's skill level v the number of times it's likely to win a World Cup over the course of an average football supporter's lifetime of 72 years = 18 World Cups. If there are 32 teams in a World Cup, you are an average team and your team qualifies for every World Cup finals, the chances are you will win 1.1 World Cups over your lifetime. If you are England and only qualify roughly 80% of the time, the chances drop to 0.96. If your team is twice as good as average, you are likely to win roughly 2 World Cups, and if 4 times better, 6 World Cups.
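The underlying model is simple enough to sketch in Python. This is my own crude back-of-envelope version, assuming a 16-team effective field (closer to the historical finals) and a win probability that scales linearly with skill; it reproduces the first figures approximately, though the chart presumably used a slightly richer model:

    # Expected World Cup wins over a fan's lifetime, assuming win probability
    # scales linearly with a team's skill relative to an average finalist.
    LIFETIME_WORLD_CUPS = 18   # 72 years at one tournament every 4 years

    def expected_wins(skill_multiple, qualify_rate=1.0, field_size=16):
        # field_size=16 is an assumption reflecting smaller historical fields;
        # an average team (skill_multiple=1) then wins 1/16 of tournaments.
        per_tournament = min(1.0, skill_multiple / field_size)
        return LIFETIME_WORLD_CUPS * qualify_rate * per_tournament

    print(expected_wins(1.0))                    # average team: ~1.1 wins
    print(expected_wins(1.0, qualify_rate=0.8))  # England-like qualifying: ~0.9
    print(expected_wins(2.0))                    # twice as good: ~2.3
    print(expected_wins(4.0))                    # four times as good: ~4.5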


England have won one World Cup, Germany three and Brazil five, so does that mean we are an average team and Germany are three times better than us and Brazil four times better?

Well, essentially yes. If you look at the average game-win ratios of all the teams that have played most regularly in World Cups v the number of World Cups they have won, it's pretty closely correlated at 0.91. Germany has a three times higher win ratio than us and Brazil four times higher.


Now I appreciate there is some self-selection involved here; this chart should really be based on first-round matches only for a totally fair comparison, but we don't have that data. I think it's reasonable to say, though, that England has not really been done out of its fair share of World Cups. I think we have won as many as our team's aggregated performance deserves. You might argue that some teams have been luckier than most (Italy certainly) and others unlucky; Mexico should have won it twice by now based on their aggregated performance.


A victory every 50 years


Doing the maths, based on England's average win ratio, we should win a World Cup roughly once every 50 years. England last won the World Cup in 1966, so in 2018 it will be 52 years since we won. I think that means we are now officially due a victory this time round, does it not?


Wednesday, 9 November 2016

How does a polling company find out how many followers the anti-research party has?

A conundrum for you...

Imagine I have set up a completely new political party, and in my manifesto I tell my followers not to trust the polls, to slam down the phone on any polling company that tries to call, and not to answer any surveys.

My party is now effectively invisible to researchers!

How does a polling company work out how many followers this new party has?

Could this phenomenon go some way to explaining the massive misread in the US election polls?

...Trump painted polling companies as the enemy; it is no wonder some of his followers might have refused to engage with them, and as a result the polls ended up with a hole in their numbers.

This is the conundrum market researchers have to face up to if they want to get to grips with political polling in the future.

We need to find a way of measuring the opinions of those that hide away from expressing their opinions.

My mum spotted a nice solution: at her local church fete during the Brexit campaign, a stall was selling 2 varieties of rhubarb, "in" and "out". They sold 26 bundles of "in" and 28 bundles of "out"! Job done.





Here is what just happened

I am writing this on the morning after the US election so I don't think I need to explain the headline....

I am sure more sophisticated explanations will emerge over the following days, but this is my take on the result of the election.

In the run-up to the US election our team have been conducting an ongoing series of political polling experiments to understand what has been going on. We have not been polling in a conventional sense, more experimenting with how to measure voting sentiment and voting intention in an attempt to find a better way of predicting the outcome of tight elections.

We have been asking a lot more indirect questions about the difficulties people were having in making up their minds, exploring more implicit measures of voter sentiment to see how they stack up against declared measures, getting people to play games predicting the outcome to reveal some of their hidden feelings, and we have also focused a lot of attention on asking open-ended questions to measure the level of passion in the arguments and to look at what reasons people have been using to make their choices.

Here is what I think has just happened, from my perspective, based on the learnings from all this research.

Trump's messaging was far stronger than Hillary's from the get-go. It was more coherent. Make America great again. Close the borders. Drain the swamp. Hillary Clinton is corrupt. We saw all this being echoed back to us over and over again in the explanations people gave for why they wanted to vote for him.

On the other hand, Clinton's messaging was extremely weak; in fact almost none of it seemed to stick. Less than 10%, I estimate, of the reasons cited for voting for Clinton were anything to do with liking her policies; it was nearly all to do with stopping Trump winning. So many people caveated their choice with an explanation that they were picking the lesser of 2 evils.

All the implicit candidate favourability measures we undertook showed us how much Clinton grated on the American public. Flashing pictures of her face elicited up to a -60% negative reaction, even worse than Trump, who is known as being a pantomime villain. Her face, and perhaps more importantly her voice, did not fit. The majority of the American public implicitly did not warm to her.

What Trump was up against in this campaign was not Clinton but Trump himself and, well, let's not beat about the bush here, his personal sociopathic character traits.

The Trump sexual harassment scandal embodied all this, and the misgivings so many people had about handing him power seriously pegged back Trump's latent momentum in the final month of the campaign as the news broke.

But beneath all this he was a lot of people's implicit preference. They could not express that in opinion polls, or even to themselves for that matter, due to the outrage being voiced in the media about his behaviour, but his core message resonated.

Then along comes the FBI email investigation a week or so before the election. This was literally like pulling out a "Trump" card. What it did was give all the people who latently liked his messaging, but were suffering from cognitive dissonance over his character, a strong emotional counter-argument for preferring him. It gave them something they could dress up as a lot more significant, and it validated all his messaging about his opponent too. It could not have been more perfectly timed.

We actually saw the change happening in real time. We had conducted a large-scale research experiment the week before the FBI press release, so on the Monday after it we did a follow-up piece of research to see what was happening, and the 7-point Clinton lead we had seen the week before had literally evaporated overnight.

The shy Trump supporters were released from the closet, so to speak, and at the same time all the people who didn't like Trump or Clinton were given a strong reason not to vote for either candidate, or to stay at home and not vote at all.

The last minute FBI volte-face was far too late in the day to undo any of this.

What we have learnt from all this is that the public's opinion on how they will vote is a very emotional process, similar to the way my daughter makes up her mind about what type of curry to order in our local curry house. She tries on several choices to see how she feels about them, thinks she has made up her mind, changes it, changes it again, and then ditches them all at the last minute when the waiter is standing over her and decides with her gut instinct.

In this case the US gut fancied trying something new, despite some serious misgivings, because the other dish did not seem at all appetizing, so they decided to roll the dice.

You can chide the pollsters if you like, but this type of emotionally charged election is almost impossible to predict even a day or so out, and it's certainly clear you cannot predict an election by simply asking people which way they intend to vote.


Here are a few other closing thoughts about why the polls got it wrong....

1. Are the types of people that slam down the phone if a pollster calls, and would never think about doing an online survey, also the type of people who might be more likely to have voted for Trump?

2. Clearly Hillary Clinton did not motivate her Democratic base to vote in the way Trump rallied his supporters, so polls weighted by past voting activity were delivering a misread.

3. Likewise, did weighting polls by traditional political allegiances have any relevance in this election?

4. Male blue-collar workers, who voted for Trump in droves, are the hardest group to reach with research, as they are working.

5. There is some evidence in our research that Trump supporters were slower to respond to our online poll invitations, so some short-turnaround polls might have closed before all the Trump supporters had a chance to register their opinion.


Monday, 17 October 2016

My name is Jon Puleston and I am addicted to information

From the moment I get up in the morning to the last thing at night I am immersed in information gathering.



News was something I used to read once a day. Ever since having a smartphone, my propensity to consume news has slowly increased month by month, and with the ever-increasing proliferation of news aggregation apps it's becoming something I dip into almost every spare moment of the day. The first thing I do when I wake up in the morning and the last thing I do at night before switching off my phone is check the "news". It has become a total addiction. In addition to news there is social media, which I consume with equal hunger, be it Facebook, Twitter or LinkedIn. When I run out of new information from these sources, I switch to things like looking through pictures on Instagram or virtual shopping on eBay.

During the day I find myself checking out "the news" every time I have a break, make a coffee, grab lunch, go to the toilet or sit in a dull meeting. Whilst watching TV I find my hand reaching out to my smartphone to double-screen information; whilst listening to the radio news over breakfast I am scanning the same news on my mobile. When I stop at a traffic light I stifle the instinct to grab my phone. Wherever I walk down the street, my head is down, foraging like a hungry pig for news truffles.

The way I am using news information is changing too. I find myself getting so emotionally involved, in the same way perhaps as you get hooked into soap operas if you watch them every day.

I find I am far more drawn into politics than I have been in the past.  The campaigning and social nature of information content is so much more significant than it once was.

Politics for me is literally replacing soap operas.

The major news stories, like the Scottish referendum, the UK election, the EU referendum and the US election, have all completely hooked me in. I have literally mainlined news content from these political stories.

…and how I am using and processing this information goes way beyond objective information gathering.  What I am looking for so often are arguments and evidence that support my point of view.  I am highly practiced in SCHADENFREUDE, delighting in reading about the downfall of political foes, people with opposite opinions having their arguments skilfully met.

The more news information I consume the broader the range of topics I need to consume to feed my habit…

Take sport for example, I don’t watch much football these days, but I do consume vast amounts of football information.  I track the action of most of the matches on a Saturday afternoon via news updates that ping me whenever certain teams score.  I listen to football pundits on the radio talking about football for hours a week.  I ferociously consume transfer news gossip and read numerous sporting blogs.

For all the tut-tutting I am probably drawn into as much salacious celebrity gossip as your average teenager - but at the same time, and paradoxically, I’m equally drawn into hard core science information – some in depth article about a breakthrough in quantum physics or some Kardashian  family antics are all hoovered up into my brain with little differentiation, hanging around for a short while before fading into the soup of other meaningless information I have consumed.

Searching for distractions

I find myself getting increasingly distracted too, doing all those stupid mind challenges they post on Facebook; I am drawn to them like a moth to a light. I have always been an obsessional problem solver and so struggle to pass a problem without wanting to think about it.

Entertain me, feed me with news, tell me something I didn’t know, titillate me, arouse me, impress me, make me feel happy, make me feel outraged, tell me a secret, fill the void, surprise me, confirm to me I am normal, summarise the complicated, provide me with a cheat sheet to life, wow me, take me out of the now to a more interesting place, feed me with information about new technology, pander to my political sensibilities, align with my point of view, tell me something bad about Donald Trump, help me to feel any emotion, I don’t really care which one it is.

Let me be clear: this is a serious addiction

As with smoking cigarettes or taking heroin, you chase the dragon of dopamine hits, but the more you do it the less you get out of it. Perversely this forces you to consume ever larger quantities, to the point where you realise it's taking over your thinking patterns and disrupting your life.

And it's not just me who's hooked: if you are reading this, you are likely to be hooked too!

It’s a global epidemic.

Snow, my partner, came across this New York Magazine article recently and sent it to me; it's long but worth a read: http://nymag.com/selectall/2016/09/andrew-sullivan-technology-almost-killed-me.html

The comparison I make is with alcohol consumption in the Middle Ages. When beer started to become freely available with the development of brewing skills, everyone drank beer all day long without constraint, even young children, and it took a few centuries to understand the negative impact it could have on our lives and to establish the social rules for its consumption.

We are at a similar point in history, I think, with information consumption on our mobile phones. It is rather getting out of control. We are all trying to fight it, I am sure, for example by banning phones from meal times and establishing social rules for when it's acceptable to whip out your phone, but I think there is still a long way to go.

Investigating information addiction.

With this background, I decided to do some research to examine how we are processing this sea of information we consume nowadays.

What type of information is cutting through and why?
How do brands actually cut through and compete?

If you are interested in finding out more about this subject, come to the MRS Customers Exposed event on Thursday 27th, when I will be exploring this topic and revealing some of the results of this research.

https://www.mrs.org.uk/event/conferences/customers_exposed_2016/course/4742/id/11742


Mea Culpa

As Mark, one of my best friends, politely reminds me sometimes: I am observing the problem but have become part of the problem too. The information I am delivering is rather aiding and abetting the process, which is on my conscience.

If you feel like you are suffering from this addiction and are looking to take steps to deal with it, here is a website I found useful.

http://www.psychguides.com/guides/treatment-for-addiction-to-smartphones/

Monday, 13 June 2016

Why do we rate everything 7? I blame teachers.

Have you ever thought it odd that when we score things we have a tendency to disproportionately use the top end of the scale? When you ask people to rate something on a scale of 1 to 10, the average person in most countries tends to score things 7.

Now if we lived in a logical world, you would have thought the average score when we rate things on a scale of 1 to 10 should be 5.

So why do we over-score the average? Well, forget acquiescence bias, I have decided to blame it on our teachers!

For most of us the first exposure we get to scoring is at school, when we get our work marked.  I am sure nearly everyone can remember those anxious feelings as the teacher handed out the homework in class and you opened your homework book to see what mark you got.

In British schools we tended to get marks out of 10. If I got 8, 9 or 10 I was happy; that was what I was aiming for. A mark of 6 or below was a disaster as far as I was concerned, and 7 seemed to be what you got most of the time. 7 was what I perceived as average.

7 being used as the average is irrational, of course. Children across the spectrum of a class should be as likely to get a score below 5 as above 5 if the marking were done totally rationally.

Yet when I think back, I am not sure that in my whole early school life I ever got a mark less than 5, even in English, and god, have you seen my spelling!

The whole process gets corrupted by the natural eagerness of teachers to encourage us, so everything gets shifted up a few notches from reality to make us feel good.

As you progress through the schooling system they start to use another marking system called grades, and that is even more irrational as it has a built-in scale heavily weighted to the positive. Instead of being graded A, B, C, D, E, F, G, H, I, J, K, they used this more euphemistic version: A+, A, A-, B+, B, B-, C+, C, C-, D, E. Where a B could actually mean you really got a mark 5 off the top score.

I think that teachers, in their efforts not to disappoint us all, have totally f%*@d up our natural internal scoring mechanism… And it's us researchers who have to deal with the consequences of all this later in life.

These experiences of being marked have anchored a scoring system in our psyche that is near impossible to shake clear of in later life when we dish out marks ourselves in surveys. It badly skews how we mark. If you ask people to rate brands on a scale of 1 to 10, they nearly all get 7. If you ask people to rate ads, they get an average of 7. If you ask people to rate films, 80% of scores are between 3 and 4.5 stars, the equivalent of about 7 out of 10.

We so often mark things as 7 when in truth we don't mean it, or rather when we cannot be bothered to think about it or when an experience is un-contextualized. This means we have to do a lot more work to actually find out the truth sometimes.

Saturday, 11 June 2016

Overfit

Has this happened to you?

You are running a piece of research and you look at the results from the first 50 respondents, and it looks like a really good story is emerging. You see some big differences in how some people are answering some of the questions and start to come up with theories as to why. You get excited and build a whole story about what the data is saying, and it all seems to make perfect sense.

Seeing and spotting trends in data is what we all love to do; in fact, that is largely what our brains are set up to do: spot differences and try and interpret them. It's all too easy to come up with ready answers to explain why, say, men over 35 prefer this brand of shampoo or why high-income groups like cheese more than low-income groups....

You then go away and wait for the full sample to answer the survey, and when you get the results back, the differences you saw initially have all evaporated. The patterns you were seeing were in fact noise that you were treating as signal. When the noise is statistically accounted for, you are left with a sea of dull homogeneous data with little or no stories to pull out.

Welcome to the world of overfit!

The term literally means "over-fitting": imposing theories on data that is not statistically robust enough to validate them.

... And it's incredibly dangerous, particularly in circumstances where you are researching niche sample groups that are difficult to reach and you end up with a completed survey without enough sample. This is a particular problem in the world of healthcare and B2B research, where samples are hard to access.

It's difficult for us to get our heads round just how random random chance is, even with large numbers. 

So what does random look like?

Toss a coin 50 times and very rarely will you get exactly 25 heads and 25 tails; it will happen only about one time in 10.

In fact, with 50 coin tosses there is a 60% chance of more than a 20% difference between the two counts, so differences in the data looking like the chart below would in fact be the expected norm.


If you had 20 questions in the survey, you would expect at least 1 of them by random chance to show a difference of 50% or more, which looks like this....



Here is a summary of the data differences you would expect to see in a survey of 20 questions sampled to 50 people.


Here also is similar data for samples of 100 (sorry, I have not done this for larger samples, it's a bit of a pain to work out!)
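Rather than working these out analytically (which, as I say, is a pain), you can simulate them for any sample size in a few lines of Python. A sketch like this, where "difference" is taken as the relative gap between the two answer groups (my reading of the figures above), lets you check the numbers yourself:

    import random

    def split(n):
        # Randomly split n respondents 50/50; return the two group sizes.
        heads = sum(random.random() < 0.5 for _ in range(n))
        return heads, n - heads

    def prob_relative_gap(n, threshold, trials=20_000):
        # Chance that the larger answer group exceeds the smaller by more
        # than `threshold` (e.g. 0.2 = a 20% relative difference).
        hits = 0
        for _ in range(trials):
            a, b = split(n)
            hi, lo = max(a, b), min(a, b)
            if lo and (hi - lo) / lo > threshold:
                hits += 1
        return hits / trials

    print(prob_relative_gap(50, 0.20))   # big gaps are the norm at n=50
    print(prob_relative_gap(100, 0.20))  # noticeably rarer at n=100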



How to be confident your data is reliable?

A simple trick is to divide the sample in 2 and see if both halves say the same thing. Then do it again 20 times and see how many times it's the same. If it's the same 19 times out of 20, that is effectively the definition of 95% certainty; the number of times out of 20, multiplied by 5, gives you a rough percentage measure of how reliable your data is. You can go a step further and divide the data into 4, then look at how often the answer is the same. If all 4 cells give the same answer, you are sitting on some quite robust data.
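Here is a minimal sketch of that split-half check in Python, simplified (my assumption) to a single yes/no question, where "saying the same thing" means both halves produce the same majority answer:

    import random

    def split_half_agreement(responses, trials=20):
        # Shuffle (in place), split the sample in two, and count how often
        # both halves give the same majority answer.
        agree = 0
        for _ in range(trials):
            random.shuffle(responses)
            half = len(responses) // 2
            a, b = responses[:half], responses[half:]
            majority = lambda xs: sum(xs) * 2 > len(xs)
            agree += majority(a) == majority(b)
        return agree / trials

    # Example: 50 yes/no answers, 60% yes.
    data = [1] * 30 + [0] * 20
    print(split_half_agreement(data))  # 19/20+ would suggest a robust difference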


Saturday, 14 May 2016

The science of visual communication

In my job I conduct a large amount of research, but I also create plenty of presentations. To help design good research, we have access to hundreds of published research-on-research papers. Yet when it comes to designing presentations, or using any form of visuals, we have to rely largely on gut instinct and experience to evaluate what works best. There are plenty of well-established working practices and graphic design experts who are exceptionally good at what they do, but very little research to help us understand the impact of different graphic design techniques, certainly in the market research arena.

Perhaps one of the reasons is that graphic designers and market researchers don't encounter each other very often.
    



A joint quest: researcher and graphic designer
Last year, part of the Guardian's digital graphics unit, responsible for creating some of the most famous infographics circulated online, formed their own company, the Graphic Digital Agency, and happened to move into the same offices as our research team in Westminster. We got talking about infographic design and the lack of research to understand how it works. I was curious to know what they knew about the science of design, and I found out they were as curious as me. So we thought that, with our experience in conducting research on research and their skills in graphic design to produce the source material, this represented a very good opportunity to work together on some experimentation. We set out on a joint quest to try and learn more about how visuals really work.

We ended up conducting over 70 experiments, testing over 500 visuals, icons, charts, presentations and infographics on over 10,000 respondents in five countries: one of the most extensive pieces of primary research I think we have ever conducted. The complete findings have been published across two ESOMAR papers: The quest to design the perfect icon, Puleston J & Sazuki S, ESOMAR (2014), and Exploring the use of visuals in the delivery of research data, Puleston J, Frost A & Stuart T, ESOMAR (2014). But I thought I would publish a summary of what we have learnt on my blog.

Tuesday, 10 May 2016

Researching the different words used by Women & Men

We have recently conducted a small piece of research to explore the differences in the language used by men and women when they describe themselves and other people.

There are some quite surprising differences, especially in the words men and women use on their CVs.

The link below is to a survey quiz highlighting some of the most popular words we have identified with the biggest differences in gender usage. Please feel free to circulate this link.


If you would like to find out more about this piece of research, which was conducted for a Women In Research event, please do get in touch and I will be happy to share the raw data. We interviewed 500 men and 500 women in the UK and the USA.



Thursday, 31 March 2016

How to make a good prediction

This is some general advice on how to make a good prediction.

1. Have an intelligent conversation with your gut instinct! 


Gut instincts are incredibly valuable when it comes to making a prediction; the best predictors often rely heavily on their gut instincts. But remember that your gut can be flawed. Your instinct is exactly that, an instinct, so any cognitive or emotional biases you have could impede your predictive success.

The trick is not to rely 100% on what your gut instincts tell you but to always question them: subject them to critical appraisal and think about any biases that might be affecting your objectivity.

It's useful to be aware of some of the most common cognitive biases: thinking shortcuts which can corrupt our mental calculations.

Friday, 20 November 2015

What should we be measuring in brand tracking studies?

…In a nutshell, what brands do you buy and why?


Byron Sharp et al have fairly convincingly proved that the key health metric of a brand is its total universe of users.

The awareness of the brand, the loyalty of the users of the brand and how much they like the brand are all rather academic constructs, as all these measures highly correlate with each other and ultimately with the brand's universe of users. All can be modeled using a Dirichlet distribution model.

The proportion of people who are brand-aware can be modeled from the proportion that are spontaneously aware of that brand. With X number of total users there will be Y number of loyalists and Z number of people who love and recommend the brand. If users drop, then liking, awareness and loyalty levels will all drop in parallel. If you ask about liking of a brand, you will find we all like the brands we use at pretty equal levels.

To illustrate the point, here is an example of data taken from a quite typical brand tracking study, where the statistical correlation between brands purchased in the last 12 months and all the other core metrics measured in the study has been calculated. The correlation for nearly every metric is above 0.85, and for some metrics it is in the high 0.9s.
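The calculation itself is trivial. A sketch along these lines would reproduce it; `tracker.csv` and the column names are hypothetical stand-ins for a real tracker's brand-level output:

    import pandas as pd

    # One row per brand; columns hold the % of respondents giving each
    # metric for that brand (a hypothetical layout of a tracker's output).
    brands = pd.read_csv("tracker.csv")
    # e.g. columns: brand, bought_12m, aware, spont_aware, loyal, love, recommend

    metrics = [c for c in brands.columns if c not in ("brand", "bought_12m")]
    print(brands[metrics].corrwith(brands["bought_12m"]))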


So you could argue that the only brand equity question really worth asking in a brand tracking survey is: “Which of these brands do you use?”

Wednesday, 14 October 2015

What can researchers learn from film script writers?

If you study the art of film making, you learn that a good film script is based around one great question that grabs your attention from the off; the story then naturally emerges from it and slowly reveals the answer. The question drives the whole story.

Here are some examples:
  • What if every day was the same? GROUNDHOG DAY 
  • What if a nun was made to be a nanny? THE SOUND OF MUSIC
  • What if a really smart innocent person went to prison? SHAWSHANK REDEMPTION
  • What if dreams & reality were inter-changeable? MATRIX
  • What if there's more to life than being ridiculously good looking? ZOOLANDER
All the books also emphasise how important good narrative structure is to making a great film, i.e. a film that people want to watch and concentrate on from start to finish. Films construct heroes through which the story is told, and these stories need to adhere to a strict story structure. There are about 7 of these basic story structures, established from a time well before the dawn of film making; in fact, the basic structure of storytelling has hardly changed in thousands of years.

Wednesday, 7 October 2015

Non-evolutionary

Most businesses evolve in a classical evolutionary way: through slow mutations in their approach to doing business, which lead to the business being more or less successful, and in survival-of-the-fittest fashion the strongest mutated variants win through. The most common way businesses "mutate" is by making a whole series of what are known as kaizen innovations: small baby-step improvements and changes to increase the efficiency of a business.

Most kaizen improvements are logical evolutionary steps: if we do this, we think we will make more money.

All of life evolves in this way, through trial and error. There are some interesting things, though, that can happen if you break out of the evolutionary approach to development and start to create things that never could emerge as a result of "market forces" or the demands of customers.

What do I mean by this? Well, the example I would like to use to illustrate it is the "unstable jet".

Imagine a bird evolving a really, really long beak: the longer the beak, in theory, the more efficiently the bird could cut through the air and the faster it could fly. The problem with having a really long beak, though, is that you reach a point of instability: with a tiny fluctuation in the movement of the tip of the beak, or a gust of air from a different direction, the beak could be deflected, instantly act like a sail, and the bird would flip over its nose. As a result there are no birds with really, really long beaks like this, as they would be unstable.

However, imagine a bird with a really long beak that had, at the end of it, a sensor and a small computerized navigation system that could make micro-adjustments to the direction of the "beak", ensuring it is always in a stable position, facing directly into the headwind and not deflected off course by a gust of wind. Now you have designed what in theory is a bird that can fly faster, because it can cut through the air more efficiently. Unfortunately, no bird is likely to evolve this extra step, because the solution is "non-evolutionary": it can never get there by baby-step "kaizen" mutations. It takes a major new "non-evolutionary" improvement to get over the hurdle of an unstable beak.

Yet man has been able to make this non-evolutionary improvement that would have been impossible in the natural world, and we have now designed jets with exactly this feature.

And this is the type of non-evolutionary form of innovation that I am particularly interested in.

Most established businesses that are killed off are killed by major disruptive innovations like this: business solutions and leaps of improvement that break the classic evolutionary kaizen business development model. The step changes that existing businesses cannot make because they would result in the total cannibalization of their existing business (making them unstable).

To get there, businesses need to take a completely different approach to innovation: stop thinking about what would make money and start focusing on what is possible. What would happen if.... To think more abstractly about what could happen if this other thing happened. To cross-connect ideas. To build fields of dreams. To invest in the connecting points. To look out for the non-evolutionary leaps.


Wednesday, 7 January 2015

Non-commercial

Imagine if there was no commercial agenda set by your company and you could do exactly what you wanted to do. What would you do?

Saturday, 3 January 2015

2015 the survey design tipping point: change now or pay the price later

"I can't change my survey as it will affect all our historical trend data."

This dilemma sits at the heart of most of the discussions we have with companies wishing to update their research studies: "I would love to do things differently but I can't!"

And it is also the reason why so many of the surveys we look at are behind the curve in terms of design and questioning techniques, and the reason why the average online survey length has crept up from 15 minutes to over 20 minutes over the last 5 years.

Well 2015 is the year in which things are going to have to change.

...and the reason is, our respondents are going mobile.

At the end of 2013 only 5% of people completing our surveys did so via a mobile or tablet device; by the end of 2014 that figure had reached 20%. In some lead markets in Asia it's already approaching 30%, and as an indicator of where things are going, by the end of 2014 more than half the people signing up to our online panels did so via a mobile or tablet device.

What we are starting to see are stark differences in completion rates between those surveys that are mobile compatible and those that are not.

We are going mobile too...

By the end of this year, all our survey respondents are going to have the choice of which surveys they want to complete, and every survey that is not mobile compatible will be marked as non-compatible. As a result, the cost of fielding these non-mobile-compatible surveys will start to increase significantly.

This is a change-or-die moment for many people's tracking studies.

The days of getting away with a 20-minute+ grid-dominated survey are pretty much over. The dropout rates for these longer surveys among respondents completing them on mobile devices are over 50%, which is simply not acceptable for anyone.





Tuesday, 30 December 2014

10 things I learnt in 2014


  1. We convert statistics into emotions: and so the best way to fast-track getting your statistics remembered is to emotionalise them!
  2. Our brains are Bayesian decision-making engines: by and large designed to work out what choices will make us most happy.
  3. A question is a problem that you ask respondents to solve: it is easy to lose sight of this simple thought. Often we design questionnaires that skirt around the problem we are trying to solve; we ask questions so euphemistically that they are a Chinese whisper away from what we are trying to find out.
  4. We like to think in different ways: researchers like to quantify things, particularly types of people and how they think and consume. We have personalities that get classified, and we are this type of person or that. The simple fact is that we are all sorts of different types of people depending on the time of day, our mood and our circumstances. We all like to think in different ways; it's boring to make the same types of decision, especially when we go shopping. The concept of "type" in research is limiting. The same person who likes to try out new types of shampoo...
  5. Scale effects: you scale something up and sometimes different maths starts to apply.
  6. If there are an infinite number of universes, then in some universes it is certain that a god will exist (as some of us know it)... and in others it is certain that a god as we know it will not exist: a nice thought, assuming there is such a thing as infinity; some physicists question this too... it might exist, but there is an infinitesimally small chance!
  7. Rating something is inherently a System 2 thinking process: compared to a binary choice, which is more System 1. For example, you come out of a film and your friend asks did you like it; in a fraction of a second you can say yes or no. But if the friend asks you to rate the film, it takes more mental processing, anything up to 5 seconds of thinking, to give it a score.
  8. 16 is a crowd: prediction, as opposed to market research, is not about the number of people you ask; it's about the quality of information available to the group of predictors and their effort and objectivity.
  9. Unwise crowds: crowd wisdom is a nice idea, but it only works in certain, rather rare, circumstances. Crowd predictions are mostly corrupted by systematic errors & network cognitive biases.
  10. Computers using artificial intelligence can now read some of our emotions better than we can: artificial intelligence tools are getting so good at reading our emotions, by combining various input data sources ranging from how we are typing to the language we are using and the music we are listening to, that they can identify traits of depression often months before they are apparent to ourselves.









Thursday, 18 December 2014

Meeting Jargon heard in 2014

These are some of the wonderful pieces of jargon I heard and noted in various meetings I attended this year... enjoy (and forgive me, I have probably used most of these myself!)

  • Boil the ocean 
  • Spin our wheels 
  • Capability area 
  • Landing it 
  • Resonate with...
  • Next steps
  • Gun up
  • Creating confusion 
  • Lense (wide angle /focused/zoom)
  • Data science unicorns 
  • The unsexy stuff 
  • Slower pace 
  • Focus in on... 
  • Support of the board 
  • Additional resources 
  • Executions logistics 
  • Treat as a priority 
  • Alignment 
  • Asking for permission not asking for permission 
  • How we are going to scale it 
  • Catching lightning in a bottle 
  • CDO 
  • UI
  • Mathmagicians 
  • Tentpoling 
  • Lumascape 
  • Ecosystem 
  • Insights into actions 
  • Data at the centre... 
  • Making a brand more culturally relevant 
  • Triangular across the data 
  • Effecasy 
  • The mandate 
  • Manadated 
  • Implementation 
  • A hard stop 
  • The pillars 
  • The lead sled dog 
  • Skin in the game 
  • In the new world 
  • Analysis paralysis 
  • 90% of data created in the last 2 years
  • Experience based economy 
  • Product service experience 
  • Ubiquitous computing
  • Ambient intelligence
  • Embedded
  • Network 
  • Internet of things 
  • Context aware 
  • Sensor fusion 
  • Personalised behavioural profiling 
  • Adaptive ai learning 
  • Anticipatory predictive analysis 
  • The Google car crash dilemma 
  • Cut off data cut off blood 
  • Hyper personalisation

If you have any nominations do tweet them #mrjargon

2014 market research book list

Coming to the end of the year, I thought I would share a list of the best books I read in 2014 that I think other market researchers might like to read. Not all of these are new books by any means, so forgive me if you have already read half of them.

This will make you smarter



This book is a compendium of scientific and philosophical ideas, presented in one- or two-page essays on quite a remarkable cross-section of topics. There are some really exciting thoughts packed into this book that I think market researchers could make good use of. I think reading it really did make me a little smarter!






Expert Political Judgment: How Good Is It? How Can We Know?


Philip E. Tetlock

Philip Tetlock's thinking has had more influence on how I think about conducting market research than any one person this year. I was introduced to this book by Walker Smith from the Futures Company and I would recommend that anyone who has an interest in the science of prediction should read this book.  Learn that political experts are not quite as good as chimps tossing coins at predicting things!




The Signal and the Noise: The Art and Science of Prediction


I realise this book is a few years old now, and I wish I had read it sooner. There are so many really important ideas stuffed into this book that market researchers can use in their everyday research. It's both inspiring and useful.






Strategy: A History


This small thumbnail belies a bloody thick book, which I have to admit I have not read every page of. It looks at strategy from all sorts of angles, from war through to politics, and summarizes the thinking of every major strategist in history, including the likes of Sun Tzu, Napoleon and Machiavelli. There is loads of great thinking for market researchers to digest, and probably even more valuable insights for anyone running a business. It contains a detailed look at game theory and the trials and issues of trying to apply strategy in real life. There is some sage advice in this book.



Decoded: The Science Behind Why We Buy



This book really helps explain the basics of shopping decision making and is a compendium of behavioral economics theory, an important topic for nearly all market researchers to understand. I really like the way it uses visual examples to explain some of the theory, making it an effortless read. This book should be on every market researcher's shelf.





100 Things Every Designer Needs to Know about People


This book should really be titled "100 things market researchers designing surveys and presentations should know about people!"... and everyone involved in either of these tasks should be encouraged to read it. Loads and loads of really clear, sensible advice.








The Curve: Turning Followers into Superfans


I read this after reading a very enthusiastic LinkedIn review by Ray Poynter, thank you! It persuaded me to buy it. There are some nice radical ideas in here about how to market things by giving things away while at the same time, at the other end of the scale, offering premium high-price solutions for those willing to pay for them.

The Numbers Game: Why Everything You Know About Football is Wrong

Chris Anderson (Author), David Sally (Author)

I rather immersed myself in reading sports stats books this year; in the way that data is transforming sporting strategy, there are lessons to be learnt by the whole of the market research industry. As an English person with a love of football, I feel a bounden duty to promote The Numbers Game, which looks at how statistical data has changed how the game is played. I loved this book, and I am afraid I bored senseless everyone I knew who had any interest in football by quoting insights from it. I also read Moneyball this year, the classic opus on how a proper understanding of stats transformed the fortunes of a major league baseball team; it is a great story and a lovely read.


Who owns the future?


Jaron Lanier

This book has an important message about the impact of the digital economy on our future. I cite from the book directly, as it best explains it: "In the past, a revolution in production, such as the industrial revolution, generally increased the wealth and freedom of people. The digital revolution we are living through is different. Instead of leaving a greater number of us in excellent financial health, the effect of digital..." Worth a read!





The golden rules of Acting 

Andy Nyman

This is a lovely little book you can read in one short sitting. Why, though, do I recommend market researchers read it? Well, not because it teaches you anything about acting, but because of what it teaches about life and humanity, dealing with failure and the right approach to challenges. There is not much difference in my mind between going for an audition and going to do a pitch presentation. I took some heart from reading this book.






Want to see some other book recommendations?  Try this site:

http://inspirationalshit.com/booklist#


Your 2015 recommendations?


I would love to hear your recommendations for books I might read in 2015; tweet me @jonpuleston


Questions on trial

Surveys are competing with a billion+ largely more fun things to do online these days, and we are reaching a tipping point where many people, the younger age groups in particular, are simply refusing to complete surveys because they are too boring.

To have any chance of competing we have to change our approach and I think this starts with taking a long hard look at some of the boring questions we are asking in our surveys.

One of them being this one...

"What brands are you aware of?"

It's a question asked in nearly every consumer survey I come across, usually asked both unprompted with an open-ended question and then prompted with a closed set of brand options (twice the work). It's one of those sacred-cow questions that everyone insists on asking.

Do we actually need to keep asking this question? Are there not better questions that could be asked in better ways, that make better use of a respondent's brain and deliver more useful data?

Here is the case for the prosecution:
  • It's a really dull question: From a respondent's point of view, these are probably the dullest, most clichéd questions they continually have to answer in surveys. Respondents don't like answering them; they trigger drop-out. 
  • Respondents put little thought into their answers: Less than half the respondents you ask name more than one or two brands when asked unprompted, and well over 20% say they don't know. The prompted question often gathers together a random set of clicks from respondents.  
  • Little statistical value: If the average respondent lists 1 or 2 brands, and assuming the number of individual brand mentions adheres to a Zipf's-law-style distribution, with the most popular brands named much more often than the least popular, then on a typical sample of 400 respondents there are only likely to be 1 or 2 brands you ever have enough data to work with statistically (see the sketch after this list). 
  • Large error boundaries: The error boundaries on this question on most survey samples are often so large that from wave to wave in brand tracking studies the fluctuations due to pure statistical error are often an order of magnitude larger than the actual underlying change in brand awareness, resulting in a lot of "overfit".
  • No stand-alone value: The data it delivers is almost always duplicated by, or can effectively be modeled from, answers given to other questions in the same survey.
  • Meaningless metric: Brand awareness information is useless in isolation from anything else. What it measures is intangible; it's certainly not an accurate measure of purchase behavior, for example. Unprompted awareness correlates at around 0.54 with purchase behavior*; for prompted awareness it drops to under 0.2*. If you specifically want to find out what brands consumers are likely to buy, there are other questions that are far more effective. It is also not a measure of how much I may like a brand....
  • I have never seen this information used profitably: It's always the chart that everyone skips past in a presentation. 
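To put a number on the statistical-value point in the list above, here is a quick Python sketch. The 20-brand category, the ~1.5 mentions per respondent and the Zipf exponent of 1 are all illustrative assumptions of mine:

    import numpy as np

    # If unprompted mentions across 20 brands follow a Zipf distribution,
    # how many mentions does each brand collect from 400 respondents
    # naming ~1.5 brands each?
    n_brands, mentions = 20, int(400 * 1.5)
    ranks = np.arange(1, n_brands + 1)
    shares = (1 / ranks) / (1 / ranks).sum()   # Zipf with exponent 1
    counts = np.round(shares * mentions).astype(int)
    print(counts)  # ~[167, 83, 56, 42, ...]: only the top brand or two
                   # collect enough mentions to analyse statistically
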
The case for the defense:  

Now, I have challenged several prominent and respected researchers, many of whom are still very wedded to asking this question in surveys, and asked them why they like to use it.
  • It's a fundamentally important measure: The brand that is mentioned first is the brand that has the most brain neuron connections and associations between the product and the category. So many see it as fundamentally the most important question you can ever ask in a survey.
Now I get this, but I am still left with a feeling of "so what". If you ask me what brands of chocolate I am aware of, I will say Cadbury's. I have been exposed to this brand all my life, seen thousands of Cadbury's ads, seen the brand on every confectionery counter I have ever visited. But I never buy Cadbury's, so what if x number of people are aware of a brand?

Now I am open to some other arguments as to why it should be kept, if anyone wants to make them, but my verdict is that it is a question that, if not completely banned, should at least be taxed, in the same way that cigarettes and alcohol are taxed to discourage their usage.

What could be asked instead?

There are a range of alternative ways of directly or indirectly measuring brand awareness that are more interesting and potentially useful.

I find one of the frustrations of answering the "what brands do you recall" question is wondering why it matters and not knowing how many to list. Simply applying a rule to the question that contains the task in a more meaningful framework for respondents can make it far less dull to answer.

You could ask:
...what are their favorite brands
...which brands they would have in their perfect supermarket
...which brand, if they could only buy one, they would choose to buy a lifelong supply of
...which brands they would take to a desert island
...which brand they would invite to a party
...which brands they would recommend to their best friends
...which brand they would invest in
...which brands they think they will still be buying in 5 years' time

All these will gather top-of-mind awareness to some degree or other, but they are more interesting and purposeful for respondents to answer, and adding in these "rules" can give them more salience and relevance; e.g. asking what brands they would have in their perfect supermarket not only measures awareness but also intent to purchase.

In head-to-head experiments we have found we get more responses to these more conceptually fun questions too.

You could turn it into a full-blown game by adding some fake brands to your list and challenging respondents to pick out the real ones. We have used this approach on several occasions; it's more fun for respondents, and the data you get back is almost identical to prompted brand awareness. You could also show them a facet of a brand's packaging and see if they can guess which brand it's for, or show them a de-branded ad.

I would also advise jumping straight past the specific awareness question and asking what range of brands they have purchased on their last 10 shopping occasions. This has a very high correlation with actual sales (c. 0.8) and again cuts to the heart of the problem: if they are aware of a brand but have not purchased it on any of their last 10 shopping occasions, it's not on their purchase radar!

If you want to defend the use of this question in your surveys and have a strong argument for doing so, I would love to hear your thoughts.