Abstract
Visual saliency models have recently begun to incorporate deep learning, achieving predictive capacity far greater than that of earlier unsupervised methods. However, most existing models predict saliency without explicit knowledge of global scene semantics. We propose a model (MxSalNet) that incorporates global scene semantic information in addition to the local information gathered by a convolutional neural network. Our model is formulated as a mixture of experts: each expert network is trained to predict saliency for a set of closely related images, and the final saliency map is computed as a weighted mixture of the expert networks' outputs, with the weights determined by a separate gating network guided by global scene information. The expert networks and the gating network are trained simultaneously in an end-to-end manner. We show that our mixture formulation improves performance over an otherwise identical non-mixture model that does not incorporate global scene information, and that our model outperforms several other visual saliency models.
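To make the mixture-of-experts formulation concrete, below is a minimal PyTorch sketch of the architecture the abstract describes: a shared CNN extracts local features, each expert head predicts a saliency map, and a gating network driven by a global scene descriptor produces the mixture weights. The module names, backbone depth, number of experts, and pooling choice are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureSaliencyNet(nn.Module):
    """Hypothetical mixture-of-experts saliency model: each expert
    predicts a saliency map; a gating network driven by a global
    scene descriptor weights the experts' outputs."""

    def __init__(self, num_experts=4, feat_channels=64):
        super().__init__()
        # Shared local feature extractor (stand-in for a CNN backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(),
        )
        # Expert heads: each maps features to a one-channel saliency map.
        self.experts = nn.ModuleList(
            nn.Conv2d(feat_channels, 1, 1) for _ in range(num_experts)
        )
        # Gating network: a scene-level descriptor (global average pooling
        # here, as an assumption) is mapped to per-expert mixture weights.
        self.gate = nn.Linear(feat_channels, num_experts)

    def forward(self, x):
        feats = self.backbone(x)                      # (B, C, H, W)
        # Global scene descriptor -> softmax mixture weights.
        scene = feats.mean(dim=(2, 3))                # (B, C)
        weights = F.softmax(self.gate(scene), dim=1)  # (B, E)
        # Stack the expert saliency maps: (B, E, H, W).
        maps = torch.cat([e(feats) for e in self.experts], dim=1)
        # Final map is the gate-weighted mixture of expert outputs.
        mixed = (weights[:, :, None, None] * maps).sum(dim=1, keepdim=True)
        return torch.sigmoid(mixed)                   # (B, 1, H, W)

# Usage: one forward pass on a dummy batch.
model = MixtureSaliencyNet()
saliency = model(torch.randn(2, 3, 128, 128))
print(saliency.shape)  # torch.Size([2, 1, 128, 128])
```

Because the gate and the experts sit in one differentiable graph, a single loss on the mixed map trains both simultaneously, matching the end-to-end training described in the abstract.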
| Original language | English (US) |
|---|---|
| Pages (from-to) | 4080-4090 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 27 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 2018 |
Keywords
- Visual attention
- deep learning
- human visual system
- saliency map
ASJC Scopus subject areas
- Software
- Computer Graphics and Computer-Aided Design