Abstract

Automatically manipulating facial attributes is challenging because it requires modifying facial appearance while preserving both the person's identity and the realism of the resulting images. Unlike prior work on facial attribute parsing, we address an inverse and more challenging problem, attribute manipulation: modifying a facial image in line with a reference facial attribute. Given a source input image and reference images with a target attribute, our goal is to generate a new image (i.e., a target image) that not only possesses the new attribute but also keeps the same or similar content as the source image. To generate new facial attributes, we train a deep neural network with a combination of a perceptual content loss and two adversarial losses, which ensure global consistency of the visual content while implementing the desired attributes, which often affect only local pixels. The model automatically adjusts the visual attributes of facial appearance and keeps the edited images as realistic as possible. The evaluation shows that the proposed model provides a unified solution to both local and global facial attribute manipulation, such as expression change and hair style transfer. Moreover, we demonstrate that the learned attribute discriminator can be used for attribute localization.
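The abstract describes a training objective that combines a perceptual content loss with two adversarial losses. As a rough illustration only, the generator's total loss can be viewed as a weighted sum of the three terms; the function name and the weights below are illustrative assumptions, not the authors' actual implementation, which this record does not specify:

```python
# Hypothetical sketch of the combined objective sketched in the abstract:
# a perceptual content loss plus two adversarial losses. All names and
# weights here are illustrative assumptions, not the paper's values.

def total_loss(content_loss, adv_realism_loss, adv_attribute_loss,
               w_content=1.0, w_realism=0.5, w_attribute=0.5):
    """Weighted sum of the three loss terms used to train the generator."""
    return (w_content * content_loss
            + w_realism * adv_realism_loss
            + w_attribute * adv_attribute_loss)

# Example with dummy scalar loss values:
print(total_loss(2.0, 1.0, 1.0))  # 1.0*2.0 + 0.5*1.0 + 0.5*1.0 = 3.0
```

In practice, the adversarial weights trade off attribute fidelity against content preservation; the paper's actual weighting scheme is not given in this record.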

Original language: English (US)
Title of host publication: Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 112-121
Number of pages: 10
Volume: 2018-January
ISBN (Electronic): 9781538648865
DOI: 10.1109/WACV.2018.00019
State: Published - May 3 2018
Event: 18th IEEE Winter Conference on Applications of Computer Vision, WACV 2018 - Lake Tahoe, United States
Duration: Mar 12 2018 - Mar 15 2018

Other

Other: 18th IEEE Winter Conference on Applications of Computer Vision, WACV 2018
Country: United States
City: Lake Tahoe
Period: 3/12/18 - 3/15/18

Fingerprint

Discriminators
Pixels
Deep neural networks

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Computer Science Applications

Cite this

Wang, Y., Wang, S., Qi, G., Tang, J., & Li, B. (2018). Weakly supervised facial attribute manipulation via deep adversarial network. In Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018 (Vol. 2018-January, pp. 112-121). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/WACV.2018.00019

Weakly supervised facial attribute manipulation via deep adversarial network. / Wang, Yilin; Wang, Suhang; Qi, Guojun; Tang, Jiliang; Li, Baoxin.

Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018. Vol. 2018-January. Institute of Electrical and Electronics Engineers Inc., 2018. p. 112-121.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Wang, Y, Wang, S, Qi, G, Tang, J & Li, B 2018, Weakly supervised facial attribute manipulation via deep adversarial network. in Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018. vol. 2018-January, Institute of Electrical and Electronics Engineers Inc., pp. 112-121, 18th IEEE Winter Conference on Applications of Computer Vision, WACV 2018, Lake Tahoe, United States, 3/12/18. https://doi.org/10.1109/WACV.2018.00019
Wang Y, Wang S, Qi G, Tang J, Li B. Weakly supervised facial attribute manipulation via deep adversarial network. In Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018. Vol. 2018-January. Institute of Electrical and Electronics Engineers Inc. 2018. p. 112-121. https://doi.org/10.1109/WACV.2018.00019
Wang, Yilin; Wang, Suhang; Qi, Guojun; Tang, Jiliang; Li, Baoxin. / Weakly supervised facial attribute manipulation via deep adversarial network. Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018. Vol. 2018-January. Institute of Electrical and Electronics Engineers Inc., 2018. pp. 112-121.
@inproceedings{9b2ec9eafd5a4561bd05701383ac77eb,
title = "Weakly supervised facial attribute manipulation via deep adversarial network",
abstract = "Automatically manipulating facial attributes is challenging because it requires modifying facial appearance while preserving both the person's identity and the realism of the resulting images. Unlike prior work on facial attribute parsing, we address an inverse and more challenging problem, attribute manipulation: modifying a facial image in line with a reference facial attribute. Given a source input image and reference images with a target attribute, our goal is to generate a new image (i.e., a target image) that not only possesses the new attribute but also keeps the same or similar content as the source image. To generate new facial attributes, we train a deep neural network with a combination of a perceptual content loss and two adversarial losses, which ensure global consistency of the visual content while implementing the desired attributes, which often affect only local pixels. The model automatically adjusts the visual attributes of facial appearance and keeps the edited images as realistic as possible. The evaluation shows that the proposed model provides a unified solution to both local and global facial attribute manipulation, such as expression change and hair style transfer. Moreover, we demonstrate that the learned attribute discriminator can be used for attribute localization.",
author = "Yilin Wang and Suhang Wang and Guojun Qi and Jiliang Tang and Baoxin Li",
year = "2018",
month = "5",
day = "3",
doi = "10.1109/WACV.2018.00019",
language = "English (US)",
volume = "2018-January",
pages = "112--121",
booktitle = "Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY - GEN

T1 - Weakly supervised facial attribute manipulation via deep adversarial network

AU - Wang, Yilin

AU - Wang, Suhang

AU - Qi, Guojun

AU - Tang, Jiliang

AU - Li, Baoxin

PY - 2018/5/3

Y1 - 2018/5/3

N2 - Automatically manipulating facial attributes is challenging because it requires modifying facial appearance while preserving both the person's identity and the realism of the resulting images. Unlike prior work on facial attribute parsing, we address an inverse and more challenging problem, attribute manipulation: modifying a facial image in line with a reference facial attribute. Given a source input image and reference images with a target attribute, our goal is to generate a new image (i.e., a target image) that not only possesses the new attribute but also keeps the same or similar content as the source image. To generate new facial attributes, we train a deep neural network with a combination of a perceptual content loss and two adversarial losses, which ensure global consistency of the visual content while implementing the desired attributes, which often affect only local pixels. The model automatically adjusts the visual attributes of facial appearance and keeps the edited images as realistic as possible. The evaluation shows that the proposed model provides a unified solution to both local and global facial attribute manipulation, such as expression change and hair style transfer. Moreover, we demonstrate that the learned attribute discriminator can be used for attribute localization.

AB - Automatically manipulating facial attributes is challenging because it requires modifying facial appearance while preserving both the person's identity and the realism of the resulting images. Unlike prior work on facial attribute parsing, we address an inverse and more challenging problem, attribute manipulation: modifying a facial image in line with a reference facial attribute. Given a source input image and reference images with a target attribute, our goal is to generate a new image (i.e., a target image) that not only possesses the new attribute but also keeps the same or similar content as the source image. To generate new facial attributes, we train a deep neural network with a combination of a perceptual content loss and two adversarial losses, which ensure global consistency of the visual content while implementing the desired attributes, which often affect only local pixels. The model automatically adjusts the visual attributes of facial appearance and keeps the edited images as realistic as possible. The evaluation shows that the proposed model provides a unified solution to both local and global facial attribute manipulation, such as expression change and hair style transfer. Moreover, we demonstrate that the learned attribute discriminator can be used for attribute localization.

UR - http://www.scopus.com/inward/record.url?scp=85050983870&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85050983870&partnerID=8YFLogxK

U2 - 10.1109/WACV.2018.00019

DO - 10.1109/WACV.2018.00019

M3 - Conference contribution

VL - 2018-January

SP - 112

EP - 121

BT - Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018

PB - Institute of Electrical and Electronics Engineers Inc.

ER -