
Wednesday, 9 November 2016

How does a polling company find out how many followers the anti-research party has?

A conundrum for you...

Imagine I have set up a completely new political party, and in my manifesto I tell my followers not to trust the polls, to slam down the phone on any polling company that tries to call, and never to answer any surveys.

My party is now effectively invisible to researchers!

How does a polling company work out how many followers this new party has?

Could this phenomenon help explain the massive misread in the US election polls?

...Trump painted polling companies as the enemy, so it is no wonder some of his followers refused to engage with them, and as a result the polls ended up with a hole in their numbers.

This is the conundrum market researchers have to face up to if they want to get to grips with political polling in the future.

We need to find a way of measuring the opinions of those who hide away from expressing them.

My mum spotted a nice solution: at her local church fete during the Brexit campaign, a stall was selling 2 varieties of rhubarb, "in" and "out". They sold 26 bundles of in and 28 bundles of out! Job done.





Here is what just happened

I am writing this on the morning after the US election so I don't think I need to explain the headline....

I am sure more sophisticated explanations will emerge over the following days, but this is my take on the result of the election.

In the run-up to the US election our team has been conducting an ongoing series of political polling experiments to understand what has been going on. We have not been polling in a conventional sense, but rather experimenting with how to measure voting sentiment and voting intention in an attempt to find a better way of predicting the outcome of tight elections.

We have been asking a lot more indirect questions about the difficulties people were having in making up their minds, exploring more implicit measures of voter sentiment to see how they stack up against declared measures, getting people to play games predicting the outcome to reveal some of their hidden feelings, and we have also focused a lot of attention on asking open-ended questions to measure the level of passion in the arguments and to look at what reasons people have been using to make their choices.

Here is what I think has just happened, from my perspective, based on the learnings from all this research.

Trump's messaging was far stronger than Hillary's from the get-go. It was more coherent. Make America great again. Close the borders. Drain the swamp. Hillary Clinton is corrupt. We saw all of this echoed back to us over and over again in the explanations people gave for why they wanted to vote for him.

On the other hand, Clinton's messaging was extremely weak; in fact almost none of it seemed to stick. Less than 10%, I estimate, of the reasons cited for voting for Clinton had anything to do with liking her policies; it was nearly all to do with stopping Trump from winning. So many people caveated their choice with an explanation that they were picking the lesser of 2 evils.

All the implicit candidate favourability measures we undertook showed us how much Clinton grated on the American public. Flashing pictures of her face elicited up to a -60% negative reaction, even worse than Trump, who is known for being a pantomime villain. Her face, and perhaps more importantly her voice, did not fit. The majority of the American public implicitly did not warm to her.

What Trump was up against in this campaign was not Clinton, but Trump himself and, well, let's not beat about the bush here, his personal sociopathic character traits.

The Trump sexual harassment scandal embodied all this, and the misgivings so many people were having about handing him power seriously pegged back Trump's latent momentum in the final month of the campaign as the news broke.

But beneath all this, he was a lot of people's implicit preference. They could not express that in opinion polls, or even to themselves for that matter, due to the outrage being voiced in the media about his behaviour, but his core message resonated.

Then along comes the FBI email investigation a week or so before the election. This was quite literally like pulling out a "Trump" card. What it did was give all the people who latently liked his messaging, but were suffering from cognitive dissonance over his character, a strong emotional counter-argument for preferring him. It gave them something they could dress up as a lot more significant, and it validated all his messaging about his opponent too. It could not have been more perfectly timed.

We actually saw the change happening in real time. We had conducted a large-scale research experiment the week before the FBI press release, so on the Monday afterwards we ran a follow-up piece of research to see what was happening, and the 7-point Clinton lead we had seen the week before had literally evaporated overnight.

The shy Trump supporters were released from the closet, so to speak, and at the same time all the people who liked neither Trump nor Clinton were given a strong reason not to vote for either candidate, or to stay at home and not vote at all.

The last minute FBI volte-face was far too late in the day to undo any of this.

What we have learnt from all this is that deciding how to vote is a very emotional process, similar to the way my daughter makes up her mind about what type of curry to order in our local curry house. She tries on several choices to see how she feels about them, thinks she has made up her mind, changes it, changes it again, and then ditches them all at the last minute when the waiter is standing over her and decides on gut instinct.

In this case the US gut fancied trying something new. Despite some serious misgivings, and because the other dish did not seem at all appetizing, they decided to roll the dice.

You can chide the pollsters if you like, but this type of emotionally charged election is almost impossible to predict even a day or so out. What is certainly clear is that you cannot predict an election simply by asking people which way they intend to vote.


Here are a few other closing thoughts about why the polls got it wrong....

1. Are the types of people that slam down the phone if a pollster calls and would never think about doing an online survey also the same type of people who might be more likely to have voted for Trump?

2. Clearly Hillary Clinton did not motivate her Democratic base to vote in the same way Trump rallied his supporters, so polls that are often weighted by past voting activity were delivering a misread.

3. Likewise, did weighting polls by traditional political allegiances have any relevance in this election?

4. Male blue-collar workers, who voted for Trump in droves, are the hardest group to reach with research as they are at work.

5. There is some evidence in our research that Trump supporters were slower to respond to our online poll invitations, so some short-turnaround polls might have closed before all the Trump supporters had a chance to register their opinion.


Wednesday, 7 October 2015

Non-evolutionary

Most businesses evolve in a classical evolutionary way: through slow mutations in their approach to doing business, which make the business more or less successful, and in a survival-of-the-fittest way the strongest mutated variants win through. The most common way businesses "mutate" is by making a whole series of what are known as kaizen innovations – small baby-step improvements and changes that increase the efficiency of a business.

Most kaizen improvements are logical evolutionary steps: if we do this, we think we will make more money.

All of life evolves in this way, through trial and error. There are some interesting things, though, that can happen if you break out of the evolutionary approach to development and start to create things that could never emerge as a result of "market forces" or the demands of customers.

What do I mean by this? Well, the example I would like to use to illustrate it is the "unstable jet".

Imagine a bird evolving a really, really long beak: the longer the beak, in theory, the more efficiently the bird could cut through the air and the faster it could fly. The problem with having a really long beak, though, is that you reach a point of instability. A tiny fluctuation in the movement of the tip of the beak, or a gust of air from a different direction, could deflect the beak, which would instantly act like a sail and flip the bird over its nose. As a result there are no birds with really, really long beaks like this, as they would be unstable.

However, imagine a bird with a really long beak where, at the end of it, there is a sensor and a small computerized navigation system that can make micro-adjustments to the direction of the "beak" to ensure it always stays in a stable position, facing directly into the headwind and not deflected off course by a gust of wind. Now you have designed what in theory is a bird that can fly faster, because it can cut through the air more efficiently. Unfortunately no bird is likely to evolve this extra step, because the solution is "non-evolutionary". It can never get there by baby-step "kaizen" mutations. It takes a major new "non-evolutionary" improvement to get over the hurdle of an unstable beak.
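For anyone who likes to see the idea in concrete terms, here is a minimal toy sketch in Python (with made-up numbers rather than any real aerodynamics) of what that sensor-driven micro-adjustment does: without a corrective feedback loop the deflection runs away, with one it stays pointed into the wind.

import random

def simulate_beak(gain, steps=50):
    """Toy model: each step a gust nudges the beak and any existing
    deflection amplifies (the instability); the correction is a simple
    proportional feedback pulling it back towards the headwind."""
    angle = 0.0                               # deflection from the headwind
    for _ in range(steps):
        gust = random.uniform(-1.0, 1.0)      # random sideways gust
        angle = angle * 1.2 + gust            # unstable: deflections grow
        angle -= gain * angle                 # sensor-driven micro-adjustment
    return angle

print("no correction  :", round(simulate_beak(gain=0.0), 1))   # huge deflection
print("with correction:", round(simulate_beak(gain=0.5), 1))   # stays near zero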

Yet man has been able to make this non-evolutionary improvement, which would have been impossible in the natural world, and we have now designed jets with exactly this feature.

And this is the type of non-evolutionary form of innovation that I am particularly interested in.

Most established businesses that are killed off are killed by major disruptive innovations like this – business solutions and leaps of improvement that break the classic evolutionary kaizen business development model. These are the step changes that existing businesses cannot make, because doing so would result in the total cannibalization of their existing business (making them unstable).

To get there, businesses need to take a completely different approach to innovation: stop thinking about what would make money and start focusing on what is possible. What would happen if...? To think more abstractly about what could happen if this other thing happened. To cross-connect ideas. To build fields of dreams. To invest in the connecting points. To look out for the non-evolutionary leaps.


Wednesday, 7 January 2015

Non-commercial

Imagine if there was no commercial agenda set by your company and you could do exactly what you wanted to do. What would you do?

Saturday, 3 January 2015

2015 the survey design tipping point: change now or pay the price later

I can't change my survey as it will affect all our historical trend data.

This dilemma sits at the heart of most of the discussions we have with companies wishing to update their research studies. "I would love to do things differently but I can't!"

And it is also the reason why so many of the surveys we look at are behind the curve in design and questioning techniques, and the reason why the average online survey length has crept up from 15 minutes to over 20 minutes over the last 5 years.

Well 2015 is the year in which things are going to have to change.

...and the reason is, our respondents are going mobile.

At the end of 2013 only 5% of people completing our surveys did so via a mobile or tablet device; by the end of 2014 that figure had reached 20%. In some leading markets in Asia it is already approaching 30%, and as an indicator of where things are going, by the end of 2014 more than half the people signing up to our online panels did so via a mobile or tablet device.

What we are starting to see is stark differences in completion rates between those surveys that are mobile compatible and those that are not.

We are going mobile too...

By the end of this year all our survey respondents are going to have the choice of which surveys they want to complete, and every survey that is not mobile compatible will be marked as non-compatible. As a result, the cost of fielding these non-mobile-compatible surveys will start to increase significantly.

This is a change-or-die moment for many people's tracking studies.

The days of getting away with a 20-minute-plus, grid-dominated survey are pretty much over. The dropout rates for these longer surveys among respondents completing them on mobile devices are over 50%, which is simply not acceptable for anyone.





Friday, 20 June 2014

Baby steps into the wearable era of research: ESOMAR DD 2014 Roundup

Compared to other global research events the ESOMAR Digital Dimensions conference is by no means the biggest, and it faces competition, without doubt, from more ‘ideas’-driven events, but nevertheless it is by far and away still my favourite market research event on the global calendar. Now, I have to say that because I was chairman this year, but I do feel that despite all the competition it has reliably proved to be one of the most fruitful sources of new thinking and new trends for the market research industry - I consistently learn so much more at this event compared to the others I attend, and this year it was particularly fruitful.

I think that part of its success is down to the consistently high standards ESOMAR sets on paper submissions: only 1 in 5 papers gets selected, and it demands a lot more robust thinking from its participants. What you get as a result is a really thoughtful mixture of new ideas, philosophy and argued-out science.

This year saw one of the strongest collections of papers ever assembled, so much so that the selection committee asked to extend the prizes beyond 1st place. There were 6 major themes that emerged, and 1 paper that I think could go on to have a major impact well beyond the boundaries of market research, and I returned home with 23 new buzzwords and phrases to add to my growing collection (see other post).

The big themes

1. The Physiological data age: At this conference we witnessed some of the baby steps being taken into the world of wearable technology, and a proclamation by Gawain Morrison from SENSUM, who were one of the stars of the event, that we are about to enter the physiological data age. They showed us a galvanic skin response recording of a 7-hour train journey, which revealed that the highest stress point on the journey was caused not by any delays or anxiety about reaching the station, but by the moment the on-board internet service went down! IPSOS are one of many MR companies starting to experiment with Google Glass and showed us how they were using it to conduct ethnographic research among new parents for Kimberly-Clark. We saw some wonderful footage of a father interacting with his newborn child in such a natural and intimate way that it does not take much of a leap of the imagination to realise wearable technology is going to be a big topic at future MR events.

2. The Big Privacy issues looming over these new techniques: The rise of wearable devices raises a whole range of new issues surrounding data privacy, which were widely discussed at this conference. Alex Johnson highlighted, in his award-winning work Exploring the Practical Use of Wearable Video Devices, which won best paper, the central emerging dilemma: when doing wearable research it is almost impossible to avoid gathering accidental data from people and companies who have not given their consent to take part in the research. It is something the research industry needs to take stock of.

3. Developing the new skills needed to process massive quantities of data: The second big focus of this conference, which Alex Johnson’s paper also highlighted, was the enormity of the data evaluation tasks researchers face in the future, for example processing hundreds of hours of video and metadata generated from wearable devices. Image processing software is a long way from being able to efficiently process high volumes of content right now. He had some good ideas for processing this type of data, proposing a whole new methodological approach that centres on building taxonomies and shortcuts for what a computer should look for, and a more iterative analytical approach. In one of the most impressive papers at the conference, TNS & Absolute Data provided an analytical guide to how they deconstructed 20 million hours of mobile phone data to build a detailed story about our mobile phone usage that could be utilised as a media planning platform for the phone – the research battleground of the future is surely going to be fought over who has the best data processing skills.

4. De-siloed research techniques: I wish I could think of a better simple phrase to describe this idea, as it was probably the strongest message coming out of the ESOMAR DD conference - the emergence of a next-generation class of more de-siloed research methodologies that combine a much richer range of less conventional techniques and a more intelligent use of research participants. Hall & Partners described a new multi-channel research approach that involved a more longitudinal relationship with a carefully selected small sample of participants, where across 4 stages of activity they engaged them in a mix of mobile diary, forum discussion and conventional online research - challenging them not just to answer questions but to help solve real marketing problems. Millward Brown described a collaboration with Facebook where they mixed qual, mobile intercept research and task-based exercises to understand more about how mobiles are used as part of the shopping experience. Mesh Planning described how they integrated live research data with fluid data analysis to help a media agency dynamically adjust its advertising activity. IPSOS showed us some amazing work for Kimberly-Clark that spanned the use of Facebook to do preliminary qual, social media analysis, traditional home-based ethnography, and a new technique of glassnography. What all these research companies demonstrated was that, decoupled from the constraints of convention, given a good open brief from a client and access not just to the research data the research company can generate but also to the data the client holds themselves, research companies can do some amazing things!

5. Mining more insights from open-ended feedback: Text analytics in its infancy focused on a basic understanding of sentiment, but 3 great papers at the event showed how much more sophisticated we are becoming at deciphering open-ended feedback. Examining search queries seems to be a big, underutilised area for market researchers right now, and KOS Research and Clustaar elegantly outlined how you could gather a really deep understanding of people’s buying motivations by statistically analysing the search queries around a topic. Annie Pettit from Peanut Labs, looking at the same issue from the other end of the telescope, showed how suggestions to improve brands and new product development opportunities could be extracted from social media chatter by careful deconstruction of the language used to express these ideas. And Alex Wheatley, in my team at GMI, who I am proud to say won a silver prize for his paper, highlighted just how powerful open-ended feedback from traditional market research surveys can be when subjected to quant-scale statistical analysis, rivalling and often surpassing the quality of feedback from banks of closed questions.

6. Better understanding the role of mobile phones & tablets in our lives: We learnt a whole lot more about the role of mobile phones and tablets in our lives at the conference, some of it quite scary. We had expansive looks at this topic from Google, Yahoo and Facebook. AOL provided some useful Shapley-value analysis to highlight the value of different devices for different tasks and activities; it demonstrated how the tablet is emerging as such an important “evening device”, its role in the kitchen and bedroom, and how the combination of these devices opens up our access to brands. We learnt how significant the smartphone is when we go retail shopping, for a combination of social and investigative research reasons. We learnt about the emergence of the “Google shop assistant” - many people prefer to use Google in shops to answer their shopping queries rather than ask the shop assistants - and how we use the phone to seek shopping advice from our friends and post our trophy purchases on social media.

The impact of technology on our memory

The paper that had the single biggest impact at the conference was some research by Nick Drew from Yahoo! and Olga Churkina from Fresh Intelligence Research showing how our use of smartphone devices is really impacting our short-term memory – we are subcontracting so many reminder tasks to the technology we carry around with us that we are not using our memory as actively. This was demonstrated by a range of simple short-term memory tests correlated with mobile phone usage, which found the heavier smartphone users performing less well. The smartphone is becoming part of our brain! This obviously has much bigger implications outside the world of market research, and so I am sure we are going to hear a lot more about this topic in the future.

A scary thought, which made the great closing session by Alex Debnovsky from BBDO about going on a digital detox all the more salient. I am going to be taking one soon!

Tuesday, 6 November 2012

The 4 killer stats from the ESOMAR 3D conference

I was only able to attend one day of this conference - for me, without doubt, the most useful research conference of the year - so I am sorry that I can only give you half the story, but here is what I brought back with me: 4 interesting stats, 3 new buzzwords and 1 stray fact about weather forecasting.

350 out of 36,000: This is how many useful comments Porsche managed to pick out from analysing 36,000 social media comments about their cars. So the cost-benefit analysis of this runs a bit short, and this was probably the headline news for me from the ESOMAR 3D conference: no existing piece of text analytics technology seems to be capable of intelligently processing this feedback. Every single one of these comments had to be read and coded manually. I was shocked. I thought we were swimming in text analytics technology, but apparently most of the existing tools fall short of the real needs of market researchers right now (I spot one big fat opportunity!).

240 hours: This was the amount of time spent, again conducting manual free-text analysis, by IPSOS OTX to process data from 1,000 Facebook users for one project (and from this they felt they had really only scratched the surface). As Michael Rodenburgh from IPSOS OTX put it, "holy crap, they know everything about us". There are, he estimated, 50 million pieces of data associated with these 1,000 users that it is possible to access if the end user gives you a one-click permission in a survey. He outlined the nightmare it was to deal with the data generated from Facebook - just deciphering it is a task in itself, and none of the existing data analytics tools we have right now, like SPSS, are capable of even reading it. There were lots of excellent insights in this presentation, which I think deservedly won best paper.

0.18: This is the correlation between aided awareness of a brand and purchase activity measured in some research conducted by Jannie Hofmyer and Alice Louw from TNS, i.e. effectively none. So the question is: why do we bother asking this question in a survey? Far better just to ask top-of-mind brand awareness - this apparently correlates at a much more respectable 0.56. We are stuffing our surveys full of questions like these that don't correlate with any measurable behaviour. This was the key message from a very insightful presentation. They were able to demonstrate this by comparing survey responses to real shopping activity by the same individuals. We are also not taking enough care to ask a tailor-made set of questions of each respondent that gleans the most relevant information from each one of them. A buyer and a non-buyer of a product in effect need to do 2 completely different surveys. Jannie senses that the long, dull online surveys we create now are akin to fax machines and will be obsolete in a few years' time. Micro-surveys are the future, especially when you think about the transition to mobile research. So we need to get the scalpel out now and start working out how to optimise every question for every respondent.
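If you are wondering what producing a figure like that involves, here is a minimal sketch, assuming you have matched survey answers and real purchase records for the same individuals. The data below is invented purely for illustration; it is not the TNS study's method or figures, just the shape of the calculation.

from scipy.stats import pearsonr

# One entry per respondent: 1 = aware / bought, 0 = not (invented example data)
aided_awareness = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]   # nearly everyone claims awareness
top_of_mind     = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]   # fewer name the brand unprompted
purchased       = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]   # matched real shopping behaviour

r_aided, _ = pearsonr(aided_awareness, purchased)
r_tom, _ = pearsonr(top_of_mind, purchased)
print(f"aided awareness vs purchase: r = {r_aided:.2f}")   # weak
print(f"top of mind vs purchase:     r = {r_tom:.2f}")     # much stronger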

50%: The average variation between the claimed online readership of various Dutch newspapers, as published by their industry JIC, and the readership levels measured behaviourally using PC and mobile activity tracking, as conducted by Peit Hein van Dam from Wakoopa. There was such a big difference that he went to great lengths to try to clean and weight the behavioural measurement to account for the demographic skew of his panel, but found this did not bring the data any closer to the industry data - in fact it moved further away. Having worked in media research for several years I am well aware of the politics of industry readership measurement processes, so I am not surprised how "out" this data was, and I know which set of figures I would use. He pointed out that cookie-based tracking techniques in particular are really falling short of delivering any kind of sensible media measurement of web traffic. He cited the "unique visitors" statistic published for one Dutch newspaper website and pointed out that it was larger than the entire population of the Netherlands.

Note: Forgive me if I got any of these figures wrong - many of them were mentioned in passing and so I did not write all of them down at the time - so I am open to any corrections and clarifications if I have made some mistakes.

3 New buzzwords

Smart Ads: the next generation of online advertising, with literally thousands of variant components that are adapted to the individual end user.

Biotic Design: A technique pioneered by Yahoo that uses computer modelling to predict the stand-out and noticeability of content on a web page. It is used to test advertising and page design, and we were shown how close to real eye-tracking results this method could be. We were not told the magic behind the black-box technique, but it looked good to me!

Tweetvertising: Using tweets to promote things (sister of textvertising).

One stray fact about weather forecasting

Predicting the weather: We were told by one of the presenters that although we have supercomputers and all the advances delivered by the sophisticated algorithms of the Monte Carlo method, if you want to predict what the weather is going to be like tomorrow, the most statistically reliable method is still to look at what the weather is like today, compare it to how it was yesterday, and then draw a straight-line extrapolation! I also heard that 10 human beings asked to guess what the weather will be like, operating as a wisdom-of-the-crowds team, could consistently outperform a supercomputer's weather prediction programmed with the 8 previous days of weather activity. Both of these "facts" may well be popular urban myths, so I do apologise if I might be passing on tittle-tattle, but do feel free to socially extend them out to everyone you know to ensure they become properly enshrined in our collective consciousness as facts!
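For what it's worth, that straight-line extrapolation amounts to nothing more than this (a minimal sketch with invented temperatures):

# Persistence-plus-trend forecast: tomorrow = today + (today - yesterday)
yesterday_high, today_high = 14.0, 16.0           # illustrative daily highs in degrees C
tomorrow_forecast = today_high + (today_high - yesterday_high)
print(f"Forecast for tomorrow: {tomorrow_forecast:.1f} C")   # 18.0 C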

Sunday, 4 November 2012

Big data and the home chemistry set


Are we all dodos? I heard a couple of people tell us at the ESOMAR 3D conference that we are perilously close to extinction, that we market researchers are dodos. In fact this has been a bit of a common theme at many of the conferences I have attended in the last few years: a prediction of the terminal decline of research as we know it. The message is that our industry is going to be hit by a bus, with the growth of social media and the big boys like Google, Facebook and IBM muscling in on our space. We are also, in many parts of the world, facing tough economic times and tightening budgets.

Yet despite all this, it appeared that this was the best-attended 3D conference ever, and it's not just this isolated conference either. I have been going to research conferences all around the world over the last year and they all seem to be seeing growing numbers of attendees, and all I can sense from these conferences, and particularly from this event, is an industry brimming with confidence and ideas.

So are we all putting on a brave face? Are we naively sleep walking into the future?   I don't think so...

Thursday, 1 March 2012

Evolution of Advertising v Survey design


Designing surveys is without doubt a creative process, similar in many ways to other creative commercial art forms like advertising and presentation design. I feel there is something we can learn from looking at the evolution of these sister creative processes...

Wednesday, 22 February 2012

Honesty of responses: the 7 factors at play



I read this very interesting post by Edward Appleton about the authenticity of people's online behaviour, published on the Greenbook blog: http://www.greenbookblog.org/2012/02/01/how-authentic-are-we-online/

The authenticity of online respondents is a very interesting philosophical question, and it is an area we have been looking at recently too - albeit perhaps in a more literal way...

We have been examining ‘honesty’ within the online survey environment by comparing answers to questions where we hold known norm figures about behavioural activity, and building up a picture of the levels of honest answering to different types of questions.
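To give a flavour of the idea (this is an illustrative sketch with invented figures, not our actual study or its benchmarks): if you know the real-world incidence of a behaviour, you can express how far survey claims drift from it as an over-claim ratio.

# Compare claimed behaviour in a survey against known benchmark incidence.
# All figures are invented purely for illustration.
known_norms = {                               # benchmark share of the population
    "visited_cinema_last_month": 0.30,
    "voted_in_last_election": 0.65,
}
survey_claims = {                             # share of respondents claiming it
    "visited_cinema_last_month": 0.42,
    "voted_in_last_election": 0.78,
}

for question, norm in known_norms.items():
    claimed = survey_claims[question]
    print(f"{question}: claimed {claimed:.0%} vs norm {norm:.0%} "
          f"(over-claim x{claimed / norm:.2f})")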

Monday, 7 November 2011

Editorial: Pop Science Books


Reg Baker, in his inimitable style, posted a wonderfully forthright blog piece (click here to read) bemoaning the industry's taste in books, pointing out that the great majority of the books in the list of nominees for the books that had the most transformative impact on market research were popular science, and highlighting how unscientific some of these types of books are, championing potentially totally bogus theories that have not been subjected to proper scientific scrutiny.

Now, I can totally understand his emotional reaction to this. I have exactly the same response when I go into health food shops and see all the homoeopathic remedies on the shelf, and am horrified that they can get away with their often totally bogus healing claims that have not been subjected to any rigorous medical trials.

But I have a few points on this that I thought I would share:

1. Books are not apples: and in most cases not in the class of homoeopathic-style remedies; one or two bad ones do not rot the whole crop. Whilst there are some very bad pop science books, there are also some really brilliant ones that, as Tom Ewing has pointed out, help eloquently digest and explain often very complex subject matter in a clear and understandable way. To draw overriding conclusions about pop science books on the basis of an analysis of part of the sample is in itself bad science.

Friday, 4 November 2011

What is the next big thing?

I asked the panel of judges of my research transformation awards, made up of industry thought leaders and innovators from across the global market research industry, to make their predictions of what they thought would transform market research in the future. There were some very interesting and intelligent thoughts, and so I thought I would share these predictions with you:

Friday, 23 September 2011

ESOMAR Congress Award for Best Methodology Paper


I am very pleased to have won the ESOMAR best methodology paper award at this year's annual congress, along with Deborah Sleep from Engage Research, for our game experiments exploring how to put fun into surveys and the impact it can have.
A thank you to the company that pioneered gamification

Monday, 6 June 2011

Most Thumbed Books


I'm afraid I have a terrible habit of turning over the corners of pages of books when I come across something I think is interesting, and scribbling notes. My girlfriend's mum Alison, who is a book collector, would be horrified to hear about this - I am sorry - but it is quite a good way of telling retrospectively how useful a book is.

This is a list of my most thumbed and annotated books, and I thought this could be the start of a quest to find the most useful books for market researchers to read…

Friday, 25 March 2011

ARF Great Minds Quality in Research Award 2011

I am pleased to announce that my team at GMI Interactive has won the inaugural ARF 2011 Great Minds, Quality in Research Award for the work we have been doing with Mintel to re-engineer their online consumer brand tracking surveys.


These surveys were redesigned using QStudio, our in-house Flash survey design platform, as a replacement for the existing HTML surveys, and we managed to dramatically improve both the consumer experience, reducing drop-out by 50%, and the quality of the data, increasing the average click count by 20%.

We will be publishing a full case study explaining the journey we went through to re-engineer these surveys soon.

But in the meantime my team is basking in the glory of winning this award!

 

Sunday, 13 March 2011

Sexy Questions, Dangerous Results?

At the recent NewMR Festival Bernie Malinoff delivered a presentation highlighting some of the issues surrounding the use of more "sexed-up" Flash question formats in surveys, and I have noted that this has stirred up quite a bit of follow-up commentary and debate in the last few weeks.

As part of these experiments he showed an example of the negative impact an over-elaborate, confusing Flash-format question can have on data quality and on respondents' willingness to take part in future surveys.


Well I wanted to contribute to this debate and so here are my thoughts...