TY - GEN
T1 - Transductive Unbiased Embedding for Zero-Shot Learning
AU - Song, Jie
AU - Shen, Chengchao
AU - Yang, Yezhou
AU - Liu, Yang
AU - Song, Mingli
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/12/14
Y1 - 2018/12/14
N2 - Most existing Zero-Shot Learning (ZSL) methods suffer from a strong bias problem: instances of unseen (target) classes tend to be categorized as one of the seen (source) classes, so these methods perform poorly when deployed in the generalized ZSL setting. In this paper, we propose a straightforward yet effective method named Quasi-Fully Supervised Learning (QFSL) to alleviate the bias problem. Our method follows the transductive learning paradigm, which assumes that both labeled source images and unlabeled target images are available for training. In the semantic embedding space, the labeled source images are mapped to several fixed points specified by the source categories, and the unlabeled target images are forced to map to other points specified by the target categories. Experiments conducted on the AwA2, CUB and SUN datasets demonstrate that our method outperforms existing state-of-the-art approaches by a huge margin of 9.3% to 24.5% under generalized ZSL settings, and by a large margin of 0.2% to 16.2% under conventional ZSL settings.
AB - Most existing Zero-Shot Learning (ZSL) methods suffer from a strong bias problem: instances of unseen (target) classes tend to be categorized as one of the seen (source) classes, so these methods perform poorly when deployed in the generalized ZSL setting. In this paper, we propose a straightforward yet effective method named Quasi-Fully Supervised Learning (QFSL) to alleviate the bias problem. Our method follows the transductive learning paradigm, which assumes that both labeled source images and unlabeled target images are available for training. In the semantic embedding space, the labeled source images are mapped to several fixed points specified by the source categories, and the unlabeled target images are forced to map to other points specified by the target categories. Experiments conducted on the AwA2, CUB and SUN datasets demonstrate that our method outperforms existing state-of-the-art approaches by a huge margin of 9.3% to 24.5% under generalized ZSL settings, and by a large margin of 0.2% to 16.2% under conventional ZSL settings.
UR - http://www.scopus.com/inward/record.url?scp=85055116030&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85055116030&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2018.00113
DO - 10.1109/CVPR.2018.00113
M3 - Conference contribution
AN - SCOPUS:85055116030
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 1024
EP - 1033
BT - Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
PB - IEEE Computer Society
T2 - 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
Y2 - 18 June 2018 through 22 June 2018
ER -