Kevin Randall at FastCompany pens an interesting piece on the rising tide of sentiment analysis: the players, the technologies, the possibilities, and the current pitfalls. The idea behind sentiment analysis is pretty simple (though the execution is difficult): to identify and code attitudes, whether written or spoken, toward particular topics. The explosion of activity on the web (blogs, social media platforms, etc.) has created an enormous amount of data that typically includes some kind of feeling toward the topic. This is a researcher’s and marketer’s dream: a plethora of opinions to mine and analyze. The key, however, is being able to easily collect, code, and analyze that data. The most difficult of these three steps is coding: how do you efficiently classify millions of utterances on the web in terms of their “polarity (positive or negative), intensity, and subjectivity”? Randall notes the initial problems with accuracy as well as other open questions:
Computer deciphering of word meaning is not always accurate, and tone can be missed entirely. Even the leading vendors acknowledge that the data is only 70-80% reliable. For example, we may know that the phrase “quite interesting” means one thing in America and another in Britain, but the computer would read the same meaning in both. Note some of the long-standing issues with voice-recognition technology.
There are questions about how robust or representative the data is. Are a brand’s tweeters the key word-of-mouth (WOM) influencers, or are they just a small vocal segment?
Some brands and products may be under the radar for this technology. Yes, we love to chat about Apple, but do we also regularly enjoy blogging and tweeting about Charmin or business insurance?
There are conflicting approaches, metrics, and offerings; over time a common Microsoft-, Google-, or Nielsen-type platform may emerge.
The notion of accurate sentiment analysis is very intriguing, but, as Randall notes, it is far from a finished technology.
On the one hand, we now have access to an unprecedented amount of data about people’s opinions, data that is in constant flux and constantly being updated. In business (and, I would argue, in life) the key is lessening your information gaps and reducing the information asymmetries you face. Often this can be accomplished by finding a way to take the private information people hold (e.g., opinions about a product or brand, their preferences and priorities, etc.) and make it visible. This is the essence of market/consumer research. The current environment makes collecting that data much easier, especially at high volumes, and more cost-effective.
However, the only way to derive usable, reliable information from this ocean of data is to code it properly. If we can develop reliable technology that overcomes the current shortcomings, we will be in a position to visualize the collective mind, and to do so in real time. That is a very exciting prospect, but one that will be difficult to achieve.
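To make “coding” a bit more concrete, here is a deliberately minimal sketch of the lexicon-based approach some sentiment tools start from: score each utterance for polarity, intensity, and subjectivity by matching words against hand-built word lists. The lexicons, function name, and scoring formulas here are all invented for illustration; commercial vendors use far richer models, and this toy version exhibits exactly the weaknesses Randall describes (no handling of tone, sarcasm, or regional meaning).

```python
# Toy lexicon-based sentiment coder. The word lists and formulas are
# illustrative assumptions, not any vendor's actual method.

POSITIVE = {"love", "great", "excellent", "enjoy", "amazing"}
NEGATIVE = {"hate", "terrible", "awful", "broken", "disappointing"}
SUBJECTIVE = POSITIVE | NEGATIVE | {"think", "feel", "believe", "seems"}

def code_utterance(text: str) -> dict:
    """Code one utterance for polarity (-1..1), intensity, and subjectivity."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    hits = pos + neg
    # Polarity: net direction of opinion words; intensity: how many there were.
    polarity = 0.0 if hits == 0 else (pos - neg) / hits
    # Subjectivity: share of the utterance made up of opinion-bearing words.
    subjectivity = sum(w in SUBJECTIVE for w in words) / max(len(words), 1)
    return {"polarity": polarity, "intensity": hits,
            "subjectivity": round(subjectivity, 2)}

print(code_utterance("I love this phone, the screen is amazing"))
print(code_utterance("The service was terrible and the app is broken"))
```

Run over millions of tweets or posts, even a crude coder like this yields an aggregate opinion signal; the hard part, as the article notes, is pushing its reliability past the 70-80% the leading vendors report.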