Abstract

Text-based sentiment prediction has been studied extensively in recent years, while image-centric sentiment analysis has received much less attention. In this paper, we study the problem of understanding human sentiments from large-scale social media images, considering both visual content and contextual information such as comments and captions. The challenge of this problem lies in the "semantic gap" between low-level visual features and higher-level image sentiments. Moreover, the lack of proper annotations/labels for the majority of social media images presents another challenge. To address these two challenges, we propose a novel Unsupervised SEntiment Analysis (USEA) framework for social media images. Our approach exploits relations between visual content and relevant contextual information to bridge the "semantic gap" in predicting image sentiments. Experiments on two large-scale datasets show that the proposed method is effective in addressing both challenges.
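The abstract describes USEA only at a high level, so the snippet below is a minimal illustrative sketch of the general idea of exploiting contextual text (comments, captions) to supervise image sentiment without manual labels. It is not the authors' actual USEA formulation; the function names, toy sentiment lexicon, and input arrays (image_features, comment_texts) are all hypothetical.

# Illustrative sketch only: the polarity of each image's contextual text serves
# as a pseudo-label, so an image sentiment classifier can be trained without
# any manually annotated images. This is NOT the USEA algorithm itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny hypothetical sentiment lexicon for the contextual text.
POSITIVE = {"love", "great", "happy", "beautiful", "awesome"}
NEGATIVE = {"sad", "terrible", "hate", "awful", "angry"}

def text_sentiment(text):
    """Crude lexicon-based polarity of a comment/caption: +1, -1, or 0."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return (score > 0) - (score < 0)

def train_from_context(image_features, comment_texts):
    """Use the polarity of each image's contextual text as a weak label,
    then fit a classifier on the visual features alone."""
    pseudo = np.array([text_sentiment(t) for t in comment_texts])
    mask = pseudo != 0  # keep only images whose context is opinionated
    clf = LogisticRegression(max_iter=1000)
    clf.fit(np.asarray(image_features)[mask], pseudo[mask])
    return clf  # predicts sentiment of new images from visual content only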

Original language: English (US)
Title of host publication: IJCAI International Joint Conference on Artificial Intelligence
Publisher: International Joint Conferences on Artificial Intelligence
Pages: 2378-2379
Number of pages: 2
Volume: 2015-January
ISBN (Print): 9781577357384
State: Published - 2015
Event: 24th International Joint Conference on Artificial Intelligence, IJCAI 2015 - Buenos Aires, Argentina
Duration: Jul 25, 2015 – Jul 31, 2015

Other

Other: 24th International Joint Conference on Artificial Intelligence, IJCAI 2015
Country: Argentina
City: Buenos Aires
Period: 7/25/15 – 7/31/15


ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Wang, Y., Wang, S., Tang, J., Liu, H., & Li, B. (2015). Unsupervised sentiment analysis for social media images. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2015-January, pp. 2378-2379). International Joint Conferences on Artificial Intelligence.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

