Debate on the ethics of social media research has flared up in recent months, with some eminent names taking diametrically opposed points of view.
A good starting point is the lively debate surrounding Brian Tarran's excellent post on Research Live. There have also been a couple of good posts on the Digital MR blog recently that address the pertinent issues head on. They are clearly worried that new guidelines will restrict their ability to do their job effectively and leave them vulnerable to providers from non-traditional research backgrounds - providers who may not be subject to the straitjacket of a code of conduct and can therefore deliver research solutions more quickly and cheaply, which is certainly the way the market is moving. Their worries are valid.
My own take is this: the principle of informed consent should still be the starting point. A lot of people are making loud noises about social media research being "different" from traditional market research. This is true... up to a point. But my worry is that the motivations for wanting to water down the restrictions on data usage are business ones rather than ethical ones. "If we restrict ourselves, there are non-MR companies out there who will move into our space" simply does not wash as an excuse for lowering standards.
Ray Poynter has written a series of thoughtful posts on the subject, neatly breaking down the issues. In August he wrote:
"The benefits of traditional market research ethics were that they allowed some exemptions to laws (e.g. data protections laws, laws about multiple contacts, laws about phoning people who were on ‘no call’ lists), increased public trust, and allowed market research to get close to a scientific model – for example to use concepts such as random probability sampling and statistical significance. Complying with codes of ethics incurred extra costs, but they also brought commercial benefits. The ‘proper’ market research companies could do things the non-research companies could not - so there was a commercial argument in favour of self-regulation, codes of conduct, and professional conduct bodies."Why can't this continue? Annie Pettit reported that Jillian Williams from the Highways Agency, said that anonymity is important to clients as they will take the flak rather than the research industry. Ray then appears to contradict himself slightly by saying "If market research companies abide by the old ethics, in particular anonymity and informed consent, they will not be able to compete for business in most areas where market research is growing. This is because there will be no commercial benefits that will accrue to sticking to rules and ideas that nobody else does." Surely the majority of clients, if they are looking for a genuine market research study, will want to stay firmly within the "rules" whatever they might be. There was an almighty stink when Nielsen Buzzmetrics were found to have scraped a healthcare forum that was ostensibly private. I actually had some sympathy for them - they were exploring new ways of collecting data, which in itself is quite legitimate - they'd just made a mistake in the execution and hadn't thought hard enough about the wider implications. They took the rap rather than the end client that time, but no client wants to be caught up in a grubby web scraping scandal.
Anonymity is a sociological issue that's very à la mode - there's an interesting post on the ever-excellent Face blog about current trends for real names versus pseudonyms, while debate rages over Google+'s insistence on real names. What about agencies using monitoring services such as Sysomos or Radian6, or in-house tools? These generally provide the capability to drill down to individual posts, tweets and so on, which can be sent directly to the end client. Perhaps some sort of deal could be set up with the dashboard providers whereby data is automatically anonymised in certain situations. And what about client-side monitoring, which may be informal reputation management/PR or a more in-depth research project? We must be careful not to set guidelines that are restrictive merely because the technology is so good. The principles should apply no matter what fancy new algorithms (buzzword... ugh) are created.
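To make that anonymisation idea concrete, here is a minimal sketch - my own illustration, not any vendor's actual API - of how drill-down data might be pseudonymised before it leaves the agency. The field names and the salt are assumptions; the point is that a consistent hash preserves per-author counts while stripping identities.

```python
import hashlib

def pseudonymise(posts, salt="rotate-this-per-project"):
    """Replace author handles with stable pseudonyms before export.

    The same handle always maps to the same token, so counts of
    distinct authors survive, but the identity never leaves the agency.
    (Hypothetical sketch; field names and salt are assumptions.)
    """
    exported = []
    for post in posts:
        digest = hashlib.sha256((salt + post["author"]).encode("utf-8"))
        exported.append({
            "author": "user_" + digest.hexdigest()[:10],
            "text": post["text"],
        })
    return exported

posts = [
    {"author": "@jane_doe", "text": "Customer service was dreadful today."},
    {"author": "@jane_doe", "text": "Still waiting for a reply..."},
]
print(pseudonymise(posts))  # both posts map to the same pseudonym
```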
There is also a difference between qualitative and quantitative data. There is an enormous gap between a qualitative study which drills down to individual tweets, forum posts or Facebook status updates and sends them - warts, personal details and all - to the end client, and a large-scale overview of aggregated, sentiment-analysed, anonymised data which may say nothing more than "there has been a 17% uplift in sentiment from Yorkshire women on Twitter towards the value for money of Febreze in the last 6 months" or whatever. (What is Febreze, by the way? It's something I know my girlfriend spends money on and is almost certainly totally unnecessary - beyond that I haven't got a clue.)
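To show just how little individual-level detail the quantitative end of that spectrum needs to carry, here is a toy sketch - the numbers and their meanings are entirely invented - where individual posts go in and only a single aggregate figure comes out:

```python
# Toy aggregation: 1 = positive sentiment, 0 = not. The individual
# records never appear in the deliverable - only the headline figure.
old_period = [1, 0, 1, 1, 0, 0, 1, 0]   # invented scores, earlier window
new_period = [1, 1, 1, 0, 1, 1, 0, 1]   # invented scores, later window

old_rate = sum(old_period) / len(old_period)   # 0.50
new_rate = sum(new_period) / len(new_period)   # 0.75
uplift = (new_rate - old_rate) / old_rate * 100

print(f"Sentiment uplift: {uplift:.0f}%")      # -> Sentiment uplift: 50%
```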
The next question over anonymity surrounds platforms. Bloggers, for example, are posting opinions which they want to be heard; furthermore, bloggers generally have an easy choice over whether to remain anonymous. Many do; others are quite happy to be identifiable. In my book they're about as close as you can get to "fair game". Forums are somewhat similar. At the other end of the scale you have Facebook; I would hazard a guess that many people whose profiles are set to public are actually unaware of the fact, having simply been confused by Facebook's ever-changing T&Cs, not to mention its tendency to play fast and loose with privacy. Add the fact that Facebook profiles are usually under real names - easily identifiable, with photos and so on - and you get an ugly mixture of possibly unwanted intrusion and ignorance that it is even happening. A far cry from the principle of informed consent if researchers start harvesting that data for business purposes.
Then there are the idiosyncrasies of the social networks themselves. Should there be a difference in the attitude to privacy between someone saying "I wish Nature Valley cereal bars were sweeter" and "I wish @NatureValleyUK cereal bars were sweeter"? Is the second crying out for attention - including from researchers?
Michalis Michael of Digital MR says:
"Finally a specific minor detail which is most important from a DigitalMR perspective is this: when using quotes in MR reports, we (MR agencies) should not be asked to mask the handle/meta data of a person who posted a comment on a public website – if that website states that posted comments can be viewed by anyone."I think this depends on what is being done with the data. If the data is quantitative then I believe it should be anonymised - at least before it reaches the end client who needs to make the business decisions that follow the research. For qualitative data perhaps another set of rules should apply;
Ultimately, I suppose, the question that needs to be asked is "what are the purposes of these ethical codes anyway?" I've even heard people criticising the Data Protection Act itself - which smacks of tobacco companies criticising smoking regulations. The Data Protection Act was drafted to bring UK law into line with EU privacy directives and the European Convention on Human Rights. These are fundamental, universal principles: they allow people to live their day-to-day lives in a normal way, and they enshrine into statute standards of common decency which are inherently part of human nature. The Code of Conduct must take those standards of common decency as its starting point, and leave "but other people are doing it" wheedles to the minor details. The ever-excellent Annie Pettit speculated the other day that a lack of grounding in the "old" ethical MR principles has led to a slackening of attitudes towards privacy. This sounds very plausible, but a lot of it seems simply to be frustration with, or fear of, not being able to work efficiently - particularly if there is "competition" out there coming from a different background, which will cheerfully sweep up the work without having to worry about pesky obstacles like common decency.
All this still doesn't quite square with the fact that this social media data is publicly available, sitting there for the world to see; common sense would seem to dictate that it would be daft to deliberately close our ears to the mountains of conversations taking place in the public domain. It is plainly impractical to contact thousands of people individually and ask whether the sentiment expressed in yesterday's Facebook status may be used for market research purposes. It is also unlikely that many people would feel their privacy much intruded upon by Jack Daniel's picking up on a public moan about its prices and using that to inform its pricing strategy. But it must be done in a way that minimises disruption to people's lives and doesn't fuel speculation that businesses are riding roughshod over personal data. Is there a difference between "private" and "personal"? I think so, and perhaps it's a distinction that needs to be made explicit. More generally, we may need to rethink the "informed" concept and define in which situations "informed" means "explicitly told personally".
I think there are direct parallels between the issues faced by social media researchers and those faced by the police under the Regulation of Investigatory Powers Act (RIPA). Intrusive "directed surveillance" requires authorisation under RIPA, because it involves the targeted "stalking", if you like, of a particular person; similar work online needs RIPA authorisation too. But there is no RIPA requirement for simple, day-to-day casual monitoring. If an officer in plain clothes spots someone doing something he regards as suspicious, he needs no authorisation to discreetly follow that person down the road and find out what he's up to.
As Steve Cooke of Digital MR points out, social media listening is indeed different from other forms of social media research, such as communities. But offline ethnography is subject to pretty strict controls and to informed consent principles. Social media conversations - even "person to person" conversations such as @messaging on Twitter - may be in the public domain, but any offline conversation in public is equally monitorable if you have a big enough pair of ears. Social media listeners must take care that the sensitivity of their "ears" doesn't lead them to abuse their power. Perhaps there is a case for abandoning long-standing principles - but it shouldn't be merely for the sake of convenience.