Kevin Randall at FastCompany pens an interesting piece on the rising tide of sentiment analysis: the players, the technologies, the possibilities, and the current pitfalls. The idea behind sentiment analysis is simple, even if the execution is difficult: identify and code attitudes, written or spoken, toward particular topics. The explosion of activity on the web (blogs, social media platforms, etc.) has created an enormous amount of data that typically includes some feeling about its topic. This is a researcher's and marketer's dream: a plethora of opinion to mine and analyze. The key, however, is being able to easily collect, code, and analyze that data. The most difficult of these three steps is coding: how do you efficiently classify millions of utterances on the web in terms of their "polarity (positive or negative), intensity, and subjectivity"? Randall notes the initial problems with accuracy as well as other open questions:
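To make the coding step concrete, here is a minimal, lexicon-based sketch of what "coding an utterance for polarity and intensity" can look like. This is a toy illustration, not any vendor's actual method: the word lists and intensifier weights are invented for the example, and real systems use far larger lexicons and statistical models.

```python
# Toy lexicon-based sentiment coder. All word lists and weights are
# invented for illustration; production systems are far more sophisticated.

POSITIVE = {"love", "great", "interesting", "enjoy", "good"}
NEGATIVE = {"hate", "awful", "boring", "bad", "terrible"}
INTENSIFIERS = {"very": 2.0, "quite": 1.5, "really": 2.0}

def score(text):
    """Return (polarity, intensity) for a single utterance.

    polarity:  'positive', 'negative', or 'neutral'
    intensity: summed weight of matched sentiment words (higher = stronger)
    """
    total = 0.0
    weight = 1.0
    for w in text.lower().split():
        w = w.strip(".,!?")
        if w in INTENSIFIERS:
            weight = INTENSIFIERS[w]  # boost the next sentiment word
            continue
        if w in POSITIVE:
            total += weight
        elif w in NEGATIVE:
            total -= weight
        weight = 1.0  # intensifier applies only to the following word
    polarity = "positive" if total > 0 else "negative" if total < 0 else "neutral"
    return polarity, abs(total)

print(score("This phone is quite interesting"))  # ('positive', 1.5)
print(score("The battery life is terrible"))     # ('negative', 1.0)
```

Even this crude version shows why the coding step is the bottleneck: the hard work is not the arithmetic but building a lexicon that captures how people actually express feeling.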
Computer deciphering of word meaning is not always accurate, and tone can be missed entirely. Even the leading vendors acknowledge that the data is only 70-80% reliable. For example, we may know that the phrase "quite interesting" means one thing in America and another in Britain, but the computer would assign it the same meaning in both. Compare the long-standing accuracy issues in voice recognition technology.
There are questions about how robust or representative the data is. Are a brand's tweeters the key word-of-mouth (WOM) influencers, or just a small vocal segment?
Some brands and products may fly under the radar of this technology. Yes, we love to chat about Apple, but do we regularly blog and tweet about Charmin or business insurance?
There are conflicting approaches, metrics, and offerings; over time, a common Microsoft-, Google-, or Nielsen-style platform may emerge.
The notion of accurate sentiment analysis is intriguing, but, as Randall notes, it is far from a mature technology.