TY - JOUR
T1 - Color-to-gray based on chance of happening preservation
AU - Song, Mingli
AU - Tao, Dapeng
AU - Chen, Chun
AU - Bu, Jiajun
AU - Yang, Yezhou
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China (61170142), the National Key Technology R&D Program (2011BAG05B04), the Program of International S&T Cooperation under Grant 2013DFG12840, and the Fundamental Research Funds for the Central Universities.
PY - 2013/11/7
Y1 - 2013/11/7
N2 - It is important to convert color images into grayscale ones for both commercial and scientific applications, such as reducing publication costs and enabling color-blind people to capture the visual content and semantics of color images. Recently, a dozen algorithms have been developed for color-to-gray conversion. However, none of them considers the visual attention consistency between the color image and the converted grayscale one. Therefore, these methods may fail to convey important visual information from the original color image to the converted grayscale image. Inspired by the Helmholtz principle (Desolneux et al. 2008 [16]) that "we immediately perceive whatever could not happen by chance", we propose a new color-to-gray algorithm to solve this problem. In particular, we first define the Chance of Happening (CoH) to measure the attentional level of each pixel in a color image. Afterward, natural image statistics are introduced to estimate the CoH of each pixel. To preserve the CoH of the color image in the converted grayscale image, we finally cast color-to-gray as a supervised dimension reduction problem and present locally sliced inverse regression, which can be solved efficiently by singular value decomposition. Experiments on both natural images and artificial pictures suggest (1) that the proposed approach makes the CoH of the color image and that of the converted grayscale image consistent and (2) the effectiveness and efficiency of the proposed approach in comparison with representative baseline algorithms. In addition, it requires no human-computer interaction.
AB - It is important to convert color images into grayscale ones for both commercial and scientific applications, such as reducing publication costs and enabling color-blind people to capture the visual content and semantics of color images. Recently, a dozen algorithms have been developed for color-to-gray conversion. However, none of them considers the visual attention consistency between the color image and the converted grayscale one. Therefore, these methods may fail to convey important visual information from the original color image to the converted grayscale image. Inspired by the Helmholtz principle (Desolneux et al. 2008 [16]) that "we immediately perceive whatever could not happen by chance", we propose a new color-to-gray algorithm to solve this problem. In particular, we first define the Chance of Happening (CoH) to measure the attentional level of each pixel in a color image. Afterward, natural image statistics are introduced to estimate the CoH of each pixel. To preserve the CoH of the color image in the converted grayscale image, we finally cast color-to-gray as a supervised dimension reduction problem and present locally sliced inverse regression, which can be solved efficiently by singular value decomposition. Experiments on both natural images and artificial pictures suggest (1) that the proposed approach makes the CoH of the color image and that of the converted grayscale image consistent and (2) the effectiveness and efficiency of the proposed approach in comparison with representative baseline algorithms. In addition, it requires no human-computer interaction.
KW - Chance of happening (CoH)
KW - Color-to-gray (C2G)
KW - Visual attention
UR - http://www.scopus.com/inward/record.url?scp=84881546487&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84881546487&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2013.03.037
DO - 10.1016/j.neucom.2013.03.037
M3 - Article
AN - SCOPUS:84881546487
VL - 119
SP - 222
EP - 231
JO - Neurocomputing
JF - Neurocomputing
SN - 0925-2312
ER -