Bayesian tactile face

Zheshen Wang, Xinyu Xu, Baoxin Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

Computer users with visual impairment cannot access the rich graphical content in print or digital media without visual-to-tactile conversion, which today is performed primarily by human specialists. Automated approaches to this conversion form an emerging research field that currently handles only simple graphics such as diagrams. This paper proposes a systematic method for automatically converting a human portrait image into its tactile form. We model the face with a deformable Active Shape Model (ASM) [4], enriched by local appearance models in the form of gradient profiles along the shape. The generic face model, including the appearance components, is learned from a set of training face images. Given a new portrait image, the prior model is updated through Bayesian inference. To facilitate the incorporation of a pose-dependent appearance model, we propose a statistical sampling scheme for the inference task. Furthermore, to compensate for the simplicity of the face model, edge segments of the given image are used to enrich the basic face model when generating the final tactile printout. Experiments are designed to evaluate the performance of the proposed method.
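
The abstract describes a concrete pipeline: a statistical shape prior (the ASM), a Bayesian update of that prior against image evidence via a sampling scheme, and edge segments merged in for the final tactile printout. As a rough, hypothetical illustration only (not the authors' code), the following Python/NumPy sketch approximates the posterior shape by importance sampling from a PCA shape prior, with a simple gradient-magnitude score standing in for the paper's learned gradient-profile appearance models; all function and variable names are invented for this sketch.

import numpy as np

def sample_shape_hypotheses(mean_shape, modes, eigvals, n_samples, rng):
    # Draw shapes from the PCA prior: x = mean + P b, b ~ N(0, diag(eigvals)).
    b = rng.standard_normal((n_samples, len(eigvals))) * np.sqrt(eigvals)
    return mean_shape + b @ modes.T  # (n_samples, 2N) landmark vectors

def appearance_log_likelihood(shape, grad_mag):
    # Toy stand-in likelihood: reward landmarks lying on strong image
    # gradients (the paper instead uses learned gradient profiles along
    # the shape, with a pose-dependent appearance model).
    pts = shape.reshape(-1, 2).round().astype(int)
    pts[:, 0] = np.clip(pts[:, 0], 0, grad_mag.shape[1] - 1)
    pts[:, 1] = np.clip(pts[:, 1], 0, grad_mag.shape[0] - 1)
    return np.log(grad_mag[pts[:, 1], pts[:, 0]] + 1e-6).sum()

def posterior_mean_shape(mean_shape, modes, eigvals, grad_mag,
                         n_samples=500, seed=0):
    # Importance-sampling approximation of the posterior mean shape:
    # sample from the prior, then weight each hypothesis by its
    # appearance likelihood on the given portrait image.
    rng = np.random.default_rng(seed)
    samples = sample_shape_hypotheses(mean_shape, modes, eigvals,
                                      n_samples, rng)
    logw = np.array([appearance_log_likelihood(s, grad_mag) for s in samples])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return w @ samples  # weighted average of shape hypotheses

def tactile_line_image(shape, edges):
    # Merge the fitted shape with detected edge segments, mimicking the
    # paper's enrichment of the basic face model for the final printout.
    canvas = edges.astype(bool).copy()
    pts = shape.reshape(-1, 2).round().astype(int)
    pts[:, 0] = np.clip(pts[:, 0], 0, canvas.shape[1] - 1)
    pts[:, 1] = np.clip(pts[:, 1], 0, canvas.shape[0] - 1)
    canvas[pts[:, 1], pts[:, 0]] = True  # crude: mark landmark pixels only
    return canvas

A faithful implementation would iterate this update, make the appearance model pose-dependent as the paper proposes, and trace full contours rather than isolated landmark pixels; the sketch is only meant to make the sampling-based Bayesian update concrete.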

Original language: English (US)
Title of host publication: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
DOIs: https://doi.org/10.1109/CVPR.2008.4587374
State: Published - 2008
Event: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR - Anchorage, AK, United States
Duration: Jun 23 2008 - Jun 28 2008

Other

Other: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
Country: United States
City: Anchorage, AK
Period: 6/23/08 - 6/28/08

Fingerprint

  • Digital storage
  • Sampling
  • Experiments

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Control and Systems Engineering

Cite this

Wang, Z., Xu, X., & Li, B. (2008). Bayesian tactile face. In 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR [4587374] https://doi.org/10.1109/CVPR.2008.4587374

Bayesian tactile face. / Wang, Zheshen; Xu, Xinyu; Li, Baoxin.

26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR. 2008. 4587374.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Wang, Z, Xu, X & Li, B 2008, Bayesian tactile face. in 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR., 4587374, 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Anchorage, AK, United States, 6/23/08. https://doi.org/10.1109/CVPR.2008.4587374
Wang Z, Xu X, Li B. Bayesian tactile face. In 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR. 2008. 4587374 https://doi.org/10.1109/CVPR.2008.4587374
Wang, Zheshen ; Xu, Xinyu ; Li, Baoxin. / Bayesian tactile face. 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR. 2008.
@inproceedings{116ab355c3b24784bcaf9cf46efd8977,
title = "Bayesian tactile face",
abstract = "Computer users with visual impairment cannot access the rich graphical contents in print or digital media unless relying on visual-to-tactile conversion, which is done primarily by human specialists. Automated approaches to this conversion are an emerging research field, in which currently only simple graphics such as diagrams are handled. This paper proposes a systematic method for automatically converting a human portrait image into its tactile form. We model the face based on deformable Active Shape Model (ASM)[4], which is enriched by local appearance models in terms of gradient profiles along the shape. The generic face model including the appearance components is learnt from a set of training face images. Given a new portrait image, the prior model is updated through Bayesian inference. To facilitate the incorporation of a pose-dependent appearance model, we propose a statistical sampling scheme for the inference task. Furthermore, to compensate for the simplicity of the face model, edge segments of a given image are used to enrich the basic face model in generating the final tactile printout. Experiments are designed to evaluate the performance of the proposed method.",
author = "Zheshen Wang and Xinyu Xu and Baoxin Li",
year = "2008",
doi = "10.1109/CVPR.2008.4587374",
language = "English (US)",
isbn = "9781424422432",
booktitle = "26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR",
}

TY  - GEN
T1  - Bayesian tactile face
AU  - Wang, Zheshen
AU  - Xu, Xinyu
AU  - Li, Baoxin
PY  - 2008
Y1  - 2008
N2  - Computer users with visual impairment cannot access the rich graphical contents in print or digital media unless relying on visual-to-tactile conversion, which is done primarily by human specialists. Automated approaches to this conversion are an emerging research field, in which currently only simple graphics such as diagrams are handled. This paper proposes a systematic method for automatically converting a human portrait image into its tactile form. We model the face based on deformable Active Shape Model (ASM)[4], which is enriched by local appearance models in terms of gradient profiles along the shape. The generic face model including the appearance components is learnt from a set of training face images. Given a new portrait image, the prior model is updated through Bayesian inference. To facilitate the incorporation of a pose-dependent appearance model, we propose a statistical sampling scheme for the inference task. Furthermore, to compensate for the simplicity of the face model, edge segments of a given image are used to enrich the basic face model in generating the final tactile printout. Experiments are designed to evaluate the performance of the proposed method.
AB  - Computer users with visual impairment cannot access the rich graphical contents in print or digital media unless relying on visual-to-tactile conversion, which is done primarily by human specialists. Automated approaches to this conversion are an emerging research field, in which currently only simple graphics such as diagrams are handled. This paper proposes a systematic method for automatically converting a human portrait image into its tactile form. We model the face based on deformable Active Shape Model (ASM)[4], which is enriched by local appearance models in terms of gradient profiles along the shape. The generic face model including the appearance components is learnt from a set of training face images. Given a new portrait image, the prior model is updated through Bayesian inference. To facilitate the incorporation of a pose-dependent appearance model, we propose a statistical sampling scheme for the inference task. Furthermore, to compensate for the simplicity of the face model, edge segments of a given image are used to enrich the basic face model in generating the final tactile printout. Experiments are designed to evaluate the performance of the proposed method.
UR  - http://www.scopus.com/inward/record.url?scp=51949094606&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=51949094606&partnerID=8YFLogxK
U2  - 10.1109/CVPR.2008.4587374
DO  - 10.1109/CVPR.2008.4587374
M3  - Conference contribution
AN  - SCOPUS:51949094606
SN  - 9781424422432
BT  - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
ER  -