Wednesday, 8 December 2010

NewMR Festival - some thoughts on the daytime session

I did actually set my alarm for 4am with the intention of catching some of the Aussie/Far East sessions of the NewMR festival, however willpower (or lack thereof) won the day and I could only haul myself out of bed in time for the 9am GMT sessions.

Brainjuicer's John Kearon kicked things off with a presentation on his "research robots" or DigiViduals. I'd already seen a presentation online on the same subject, but preferred this new one. Basically the concept is that you create a virtual persona, consisting of any attributes you like: behavioural traits, attitudes, tone of language, personality types, lifestyle choices...with or without more traditional sampling attributes like demographics. With your subject created, you go and scan various forms of social media (Kearon always starts with Twitter, but it can go to shopping sites, forums, YouTube...) looking for REAL people whose personal characteristics, as evidenced by the content they have created, "match" your virtual person. Then, you can simply lift content from those people and analyse the parts relevant to your study.

This is a brilliant concept. Sampling can move away from the old "middle class mothers who read magazines" to groups who share characteristics that are much more tangible. I wondered about the volume of data you'd have to trawl through before you'd find people who form a "close enough" match. Kearon pointed out that while much of the work is done automatically by the research robots, there's still a lot of manual data cleaning to be done.

I imagine in practice there is some sort of "threshold" that people have to meet to be counted in. For example, perhaps they meet at least 60% of the attributes, or else they are 40% more like our digividual than the "average" person.
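Just to make my speculation concrete, here's a toy sketch of what that kind of threshold rule might look like. The attribute names and the 60% cut-off are entirely made up by me; I have no idea how Brainjuicer actually implement the matching.

```python
# Toy illustration of a "threshold" rule for digividual matching.
# Purely my own guesswork, not Brainjuicer's actual method.

def matches_digividual(person_traits, digividual_traits, threshold=0.6):
    """Accept a real person if their content appears to exhibit at least
    `threshold` (60% here) of the digividual's defining attributes."""
    shared = sum(1 for trait in digividual_traits if trait in person_traits)
    return shared / len(digividual_traits) >= threshold

# Hypothetical example attributes
digividual = {"runs marathons", "sarcastic tone", "early adopter", "reads food blogs"}
candidate = {"sarcastic tone", "early adopter", "reads food blogs", "cat owner"}

print(matches_digividual(candidate, digividual))  # True: 3 of 4 traits = 0.75 >= 0.6
```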

The real beauty in this is that your digividual can be a totally artificial construct, not based on any real people at all; in fact, it could be an experiment to find a type of person who you don't know exists. The potential for discovering new or niche markets is endless.

Talking of artificial situations, this led nicely into Tom Ewing's presentation comparing research methods with gaming. In the last year or two, more and more commentators have predicted that online gaming will really take off to new levels in the next few years thanks to the social side. All sorts of games - whether web-based, console-based or whatever - have had a new lease of life as a result. Ewing mentioned FourSquare as the ultimate example (I'm only just about to get my first posh phone so I barely know anything about it!); I was surprised he didn't mention Second Life (does anyone actually play that any more? You hardly hear about it these days).

Ewing rattled through a series of nice analogies - but there was a clear thread running through them: showing how research can learn from the best games, which keep their players entertained and engaged. He pointed out that a game like chess, whose mechanics are simple and dull, has millions of possible game scenarios, which quickly become complex and involving, requiring a lot of thought and effort on the part of the player (or respondent!). He also pointed out that different people have different motives for playing games, and that good game designers can take this into account; similarly, research respondents have different reasons for giving up their time, and the canny research designer will bear this in mind and try to take advantage.

He made the point that Sonic the Hedgehog would be a dull game if there were a constant progress bar! However, the concept of levels in games means different things to different people, and a sense of achievement (and therefore the effort that goes in to fulfil that achievement) varies from person to person. Monopoly playing styles also vary - people's approach to risk results in very different ways of taking the game on.

Ewing also showed the similarities between gaming and research in areas like simple mobile tasks/apps and community building. While the analogies came thick and fast, the presentation was full of real-world suggestions for ways that researchers could actually go away and make their projects more interesting for respondents tomorrow.

"Gamey" was how the next presenter, Jon Puleston, described some projective techniques and again this presentation was full of practical ideas of how to improve data quality. He recently undertook a study showing increases in respondent productivity as a result of changes made to online survey designs. Imagery and snappier introductions both made a significant difference, but most interesting were the increase in data quantity/quality from using more interactive, projective techniques. One in particular (where researcher and respondent trade ideas one-for-one) was shown to be particularly effective, as was the game of "put yourself in someone else's [the client's?] shoes..." A very nice presentation.

Completing the first mini-session was Graeme Lawrence of Virtual Surveys. His presentation seemed to have less of a structured narrative, but was no less interesting for that. It was all about "not just listening": the point being that successful "NewMR" needs to be a mixture of large-scale, passive listening/monitoring ("why ask some when you can listen to all?") and more proactive asking of questions. I suppose this must vary depending on the subject - there are some areas where there are vast volumes of data already out there, but others where respondents need to be prompted and pushed. I suppose there's less noise to eliminate once people have more of a focus - at the expense of things being a bit less natural (looking forward to Mark Earls's keynote later - my rather verbose review of his book here). One example he gave showed some data on where else on Facebook the fans of a particular page go - does anyone know what tool was used to get that insight? He gave examples of Facebook fans of Next and H&M providing opinions and insight - it occurred to me that here you are restricting yourself to brand loyalists. It doesn't necessarily work for all brands, either: people may be shy to become Facebook fans of a feminine hygiene product or political party, for example.

After a short coffee break, Annelies Verhaeghe gave a terrific talk on research using social media. I loved her initial analogy of a house of cards - companies are throwing themselves into social media without having a clue about best practice, then getting surprised when things go catastrophically wrong. My current line of work is closer to PR than MR, but the facepalm horror stories come thick and fast. She quickly moved on to the issue of representativeness of online and NewMR techniques - a subject dealt with at some length by Ray Poynter in his excellent Handbook of Online and Social Media Research. Her main point was that we don't know who is talking. Real people become personae, defined by their content and personalities rather than their demographics. But haven't we heard something like that before? It's all about John Kearon's digividuals again. The sampling goalposts haven't been taken away, just moved along. She also talked about the fact that most sampling online is convenience sampling, and touched on issues of data quality (for example content that is "quoted" or duplicated). There was also a very nice graphical representation of data volume vs sentiment for a particular topic.
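On the "quoted" or duplicated content point, here's a naive sketch of what a first pass at cleaning might look like: collapse retweets and copy-pasted posts before counting anything, so they don't inflate apparent volume or sentiment. Again, this is my own illustration, not any particular tool's approach.

```python
# Naive de-duplication of social media posts before analysis.
# Illustrative only - real monitoring tools will be far more sophisticated.

import re

def normalise(text):
    """Lower-case, strip an 'RT @user:' prefix and collapse whitespace,
    so straight re-posts reduce to the same key."""
    text = re.sub(r"^rt @\w+:\s*", "", text.strip().lower())
    return re.sub(r"\s+", " ", text)

def deduplicate(posts):
    seen = set()
    unique = []
    for post in posts:
        key = normalise(post)
        if key not in seen:
            seen.add(key)
            unique.append(post)
    return unique

posts = [
    "Love the new range at H&M!",
    "RT @shopper: Love the new range at H&M!",
    "love the new range at  h&m!",
]
print(deduplicate(posts))  # only the first post survives
```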

Rijn Vogelaar followed with his take on opinion leaders, or "Superpromoters" as he calls them. He divided thoughts up into conscious, subconscious, and brand opinions. Personally I found myself a little sceptical - for a start I'm not convinced that there is an elite few of brand evangelists shaping community opinions, but I'm also not convinced that the opinions of the blind optimists, the hardcore fans, are necessarily the most important: aren't the drifters, the disloyal and the indifferent of more interest?

Rich Shaw finished up the second mini-session with a presentation about the "hacker ethic". I must admit I missed most of this - initially distracted by "NewMR chatup lines" on Twitter, and then by the gas man knocking on the door. I'll come back to it.

Academic researcher Dr Agnes Nairn gave a great overview of the ethical issues surrounding new research techniques in a talk entitled Oi, you took that without asking! Her own work is concerned with children, and she brought up practical concerns about getting the appropriate level of consent from both the child and their parents (by phone: consider mebeingmymum@gmail.com!!!) There is also an issue of data protection: I was pleasantly surprised at the level of confidence in the police dealing with personal data, but market researchers were at the bottom of the trust pile - way behind bankers.

She moved on to the central issue of informed consent. The old rules have been thrown out of the window where social media monitoring is concerned. It is difficult to inform people for whom you have no point of contact (for Facebook, forums etc) or details (Twitter), particularly if you are collecting data on a very large scale. The level of intrusion also varies on a sliding scale: there is a world of difference between taking one person's personal essay, quoting it in client meetings and using it to influence decisions on the one hand, and merely using a sentiment analysis tool to add an opinion to a set of positive/negative sentiment aggregate data on the other. I also have some sympathy with the view of Mark Zuckerberg, who caused a storm when he said that people in the Facebook generation are less bothered about privacy and more inclined to open up their lives in public online; yes, of course he has an ulterior motive, but I get the feeling that he's mostly right despite the noisy protests of various pressure groups.

Henrik Hall's chat with Ray Poynter wasn't really relevant to me, but Bernie Malinoff's presentation on the pitfalls and differences between different approaches to online surveys was interesting. Incredible that two similar methodologies, with only some small tactical differences, can give completely different results. It's the sort of thing the research industry needs to tackle quicksticks to avoid being seen as a waste of time and money by clients. Ian Ralph's practical talk on smartphone research was also interesting, although as a non-practitioner I find these highly tactical discussions a struggle to keep up with.

Betty Adamou finished with a brilliantly rousing call to arms for Facebook research. She made some bold claims about young people - email is as dead as the CD, for example - and pointed out that researchers must make the effort to reach out to respondents, not the other way round. She made some great points about the sorts of times and places respondents might want to take on a piece of research: at a bus stop, for example, or waiting for a late-running boyfriend. I'd love to see some "situation-based" research. She also said that researchers need to be more flexible about adapting to the way young people behave, especially online - by embracing things like txt spk and smilies.

The evening session features one of my recent heroes, Mark Earls, and lots more goodies: I can't wait. If it's half as good as today's session then it'll be a very enjoyable few hours.

I have cross-posted this on the NewMR site as a blog post.
