Visual and textual sentiment analysis of brand-related social media pictures using deep convolutional neural networks
Paolanti M.; Frontoni E.
2017-01-01
Abstract
Social media pictures represent a rich source of knowledge for companies seeking to understand consumers’ opinions: they are available in real time and at low cost, and they constitute active feedback that matters not only to companies developing products, but also to their rivals and to potential consumers. To estimate the overall sentiment of a picture, it is essential not only to judge the sentiment of its visual elements but also to understand the meaning of any included text. This paper introduces an approach to estimate the overall sentiment of brand-related pictures from social media based on both visual and textual cues. In contrast to existing work, we do not consider text accompanying a picture but text embedded in the picture itself, which is more challenging since the text must first be detected and recognized before its sentiment can be identified. Based on visual and textual features extracted from two trained Deep Convolutional Neural Networks (DCNNs), the sentiment of a picture is identified by a machine learning classifier. The approach was applied and tested on a newly collected dataset, the “GfK Verein Dataset”, and several machine learning algorithms were compared. The experiments yield high accuracy, demonstrating the effectiveness and suitability of the proposed approach.
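To make the fusion step concrete, the sketch below illustrates the kind of pipeline the abstract describes: visual features from one DCNN and textual features from a second DCNN (applied after text in the picture has been detected and recognized) are concatenated and passed to a machine learning classifier. This is a minimal illustration assuming scikit-learn, not the authors' implementation: the two extractor functions are stand-ins that return random vectors so the example runs end to end, and the linear SVM is just one of the several classifiers a study like this might compare.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def visual_dcnn_features(picture):
    # Stand-in for activations of the DCNN trained on the picture pixels.
    return rng.normal(size=512)

def textual_dcnn_features(picture):
    # Stand-in for the DCNN encoding of text detected and recognized
    # inside the picture (OCR step omitted in this sketch).
    return rng.normal(size=128)

def fuse(picture):
    # Concatenate visual and textual feature vectors into one descriptor.
    return np.concatenate([visual_dcnn_features(picture), textual_dcnn_features(picture)])

# Synthetic "pictures" and sentiment labels (0 = negative, 1 = positive)
# used only so the sketch executes; real data would come from the dataset.
pictures = [None] * 200
X = np.stack([fuse(p) for p in pictures])
y = rng.integers(0, 2, size=len(pictures))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))

Swapping the SVC for other estimators (e.g. random forests or logistic regression) reproduces, in spirit, the comparison of machine learning algorithms mentioned in the abstract.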