350 out of 36,000: This is how many useful comments Porsche managed to pick out from analysing 36,000 social media comments about their cars, so the cost-benefit analysis of this runs a bit short. This was probably the headline news for me from the ESOMAR 3D conference: no existing piece of text analytics technology seems to be capable of intelligently processing this feedback. Every single one of these comments had to be read and coded manually. I was shocked. I thought we were swimming in text analytics technology, but apparently most of the existing tools fall short of the real needs of market researchers right now (I spot one big fat opportunity!).
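To see why the automated tools fall short, here is a minimal sketch (the keywords and comments are entirely hypothetical) of the naive keyword matching that basic text analytics relies on. It catches obvious mentions but is blind to sarcasm, context, and anything off-list, which is why comments like Porsche's still get coded by hand:

```python
# A naive keyword coder, sketching what basic text analytics tools do.
# Code frame and comments are made up, for illustration only.
CODES = {
    "handling": ["handling", "steering", "cornering"],
    "price":    ["price", "expensive", "cost"],
}

def code_comment(comment):
    """Return the set of codes whose keywords appear in the comment."""
    text = comment.lower()
    return {code for code, words in CODES.items()
            if any(word in text for word in words)}

comments = [
    "The steering feels amazing on tight corners",
    "Way too expensive for what you get",
    "Yeah, 'great' car... broke down twice",  # sarcasm: nothing matches
]
results = [code_comment(c) for c in comments]
```

The third comment is exactly the kind a human coder catches and a keyword list misses.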
240 hours: This was the amount of time IPSOS OTX spent conducting manual free-text analysis to process data from 1,000 Facebook users for one project (and from this they felt they had really only scratched the surface). As Michael Rodenburgh from IPSOS OTX put it, "holy crap, they know everything about us". There are, he estimated, 50 million pieces of data associated with these 1,000 users that it is possible to access if the end user gives you a one-click permission in a survey. He outlined the nightmare it was to deal with the data generated by Facebook: just deciphering it is a task in itself, and none of the existing data analytics tools we have right now, like SPSS, are capable of even reading it. There were lots of excellent insights in this presentation, which I think deservedly won best paper.
0.18: This is the correlation between aided awareness of a brand and purchase activity, measured in some research conducted by Jannie Hofmeyr and Alice Louw from TNS, i.e. effectively none. So the question is: why do we bother asking this question in a survey? Far better just to ask top-of-mind brand awareness, which apparently correlates at a much more respectable 0.56. We are stuffing our surveys full of questions like these that don't correlate with any measurable behaviour. This was the key message from a very insightful presentation. They were able to demonstrate this by comparing survey responses with real shopping activity by the same individuals. We are also not taking enough care to ask each respondent a tailor-made set of questions that gleans the most relevant information from each one of them. A buyer and a non-buyer of a product in effect need to do two completely different surveys. Jannie senses that the long, dull online surveys we create now are akin to fax machines and will be obsolete in a few years' time. Micro surveys are the future, especially when you think about the transition to mobile research. So we need to get the scalpel out now and start working out how to optimise every question for every respondent.
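For anyone wanting to run this check on their own data, the calculation behind figures like 0.18 and 0.56 is just a Pearson correlation between a survey answer and an observed behaviour. A minimal sketch, using made-up respondents rather than the TNS data:

```python
# Pearson correlation between a binary survey answer (aware: yes/no)
# and observed purchase behaviour. The respondent data is hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# 1 = aware / bought, 0 = not (ten hypothetical respondents)
aided_awareness = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1]
purchased       = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]
r = pearson_r(aided_awareness, purchased)
```

Run this against a real shopper panel, question by question, and you can see which survey questions actually track behaviour and which are dead weight.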
50%: The average variation between the claimed online readership of various Dutch newspapers, as published by their industry JIC, and the readership levels measured behaviourally via PC and mobile activity tracking, as conducted by Piet Hein van Dam from Wakoopa. The difference was so big that he went to great lengths to clean and weight the behavioural measurements to account for the demographic skew of his panel, but found this did not bring the data any closer to the industry figures; in fact it moved them further away. Having worked in media research for several years I am well aware of the politics of industry readership measurement processes, so I am not surprised how "out" this data was, and I know which set of figures I would use. He pointed out that cookie-based tracking techniques in particular are really falling short of delivering any kind of sensible measurement of web traffic. He cited the "unique visitors" statistic published for one Dutch newspaper website and pointed out that it was larger than the entire population of the Netherlands.
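The weighting he describes is standard cell weighting: each demographic cell in the panel gets a weight equal to its population share divided by its panel share. A minimal sketch with invented age-band shares (not Wakoopa's actual cells or figures):

```python
# Simple cell weighting of a behavioural panel to population demographics.
# The age bands and shares below are hypothetical, for illustration only.

def cell_weights(panel_share, population_share):
    """Weight per cell = population share / panel share."""
    return {cell: population_share[cell] / panel_share[cell]
            for cell in panel_share}

panel      = {"18-34": 0.50, "35-54": 0.35, "55+": 0.15}  # skews young
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
weights = cell_weights(panel, population)
```

Each panellist's activity is then multiplied by their cell's weight. The striking point of the talk was that even after this correction the behavioural numbers moved further from, not closer to, the published JIC figures.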
Note: Forgive me if I got any of these figures wrong - many of them were mentioned in passing and I did not write all of them down at the time - so I am open to any corrections and clarifications if I have made any mistakes.
3 New buzzwords
Smart Ads: the next generation of online advertising, with literally thousands of variant components that are adapted to the individual end user.
Biotic Design: A technique pioneered by Yahoo that uses computer modelling to predict the stand-out and noticeability of content on a web page. It is used to test advertising and page design, and we were shown how close this method could come to real eye-tracking results. We were not told the magic behind the black-box technique, but it looked good to me!
Tweetvertising: Using tweets to promote things (sister of textvertising)
One stray fact about weather forecasting
Predicting the weather: We were told by one of the presenters that, although we have supercomputers and all the advances delivered by the sophisticated algorithms of the Monte Carlo method, if you want to predict what the weather is going to be like tomorrow the most statistically reliable method is still to look at what the weather is like today, compare it with how it was yesterday, and draw a straight-line extrapolation! I also heard that 10 human beings asked to guess what the weather will be like, operating as a wisdom-of-the-crowds team, could consistently outperform a supercomputer's weather prediction programmed with the 8 previous days of weather activity. Both of these "facts" may well be popular urban myths, so I apologise if I am passing on tittle-tattle, but do feel free to socially extend them to everyone you know to ensure they become properly enshrined in our collective consciousness as facts!
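That straight-line method really is as simple as it sounds. A sketch, with made-up temperatures:

```python
def extrapolate_tomorrow(yesterday, today):
    """Continue today's trend in a straight line to predict tomorrow."""
    return today + (today - yesterday)

# Hypothetical temperatures in Celsius: 14 yesterday, 16 today
forecast = extrapolate_tomorrow(14.0, 16.0)  # predicts 18.0
```

One line of arithmetic versus a supercomputer - which rather makes the presenter's point.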