Thursday, 15 May 2014

Social media listening vendors ALL have a responsibility to push for higher quality

A few months ago I happened to see a short social media insight report, written by a large, highly respected global research agency, for one of the world’s most iconic brands. It ran to just five slides and formed part of a wider research report.

I was embarrassed for the vendor (not my own company, I hasten to add!). In those five slides were several claims so patently wrong that you wonder if anyone had their head screwed on when the report was written. They started by claiming that 99.9% of comments made on Facebook originated in the US – and that global mentions had a very heavy bias towards America as well.

They went on to paste in some automated sentiment charts which claimed that in some markets, social media reaction to the client’s highly entertaining, engaging promotional campaign was >97% neutral.

They also claimed that a sudden spike in online mentions of this major, engaging, global consumer campaign was due to coverage in a minor B2B magazine discussing a particular aspect of the production.

All of this – along with some other rather spurious claims – in five slides, lest we forget.

Let’s forget about the actual numbers for a minute. What concerns me is that the exec who wrote the report clearly never bothered to think about what the metrics meant – or to run a simple common sense test. Nor did the person who signed off the report. (It doesn’t reflect well on the client, either; did they not think to push back and ask what these numbers meant?) By all means report the numbers in good faith as provided by the tool you are using…but for goodness’ sake provide a footnote or caveat explaining the limitations. If the numbers are presented as fact, anyone with an ounce of sense can rebut your findings.

Some basic understanding of how social media monitoring tools work can explain those anomalies. These tools do their best with location detection – but it’s hard to get right, and highly platform-specific. Facebook barely give away any metadata – so in most cases monitoring tools simply pick up the fact that Facebook.com is registered in the US and run with that. Similarly, automated sentiment tools tend to dump data into the “neutral” bucket when they aren’t sure – which, depending on the dataset and language, can mean that almost everything ends up marked as neutral. As for the claim about the B2B magazine…I can’t explain that without seeing the raw data, but I’d imagine it’s down to duplicate mentions in the dataset.
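To see why the neutral bucket dominates, here is a minimal sketch (in Python, with made-up word lists – this is not any vendor’s actual implementation) of the naive lexicon scoring that many automated sentiment tools boil down to. Anything the lexicon can’t score, such as slang, sarcasm or another language, falls through to “neutral” by default.

```python
# Toy lexicon-based sentiment scorer. Illustrative only: the word lists are
# hypothetical, and real tools are more sophisticated, but the failure mode
# is the same.
POSITIVE = {"love", "loved", "great", "good", "brilliant", "amazing"}
NEGATIVE = {"hate", "hated", "bad", "awful", "terrible", "rubbish"}

def classify(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    # No recognised signal (or a perfectly balanced one): default to neutral.
    return "neutral"

mentions = ["Loved the new ad!", "meh", "c'est génial", "so bad it's good"]
print([classify(m) for m in mentions])
# ['positive', 'neutral', 'neutral', 'neutral'] – slang, French and sarcasm
# all land in the neutral bucket.
```

Multiply that across millions of mentions in markets where an English keyword list barely applies, and a >97% neutral chart stops being surprising.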

I cite this specific example because I was frankly appalled at what a shoddy job this highly respected agency had done. But it’s representative of an endemic problem with poor-quality social media insights and monitoring: rubbish peddled by technology suppliers and agencies meets client-side ignorance, resulting in an acceptance of poor findings…until somebody more senior does a review, realises the findings from social media are weak and/or unreliable, and blames the approach in general rather than the specific failings. All this leads to widespread mistrust of social media listening/insights. The damage is avoidable: it takes a little common sense, a willingness to sense-check and interrogate the data rather than simply paste charts straight from a tool, and some basic caveating and management of expectations. Most anomalies can be explained.

Social media research is a crowded space, and competes with many other emerging techniques for a share of limited client budgets. It is incumbent on all suppliers to push for better standards – otherwise the mistrust will only grow, and buyers will take their money elsewhere.