Abstract
Portrait photos (facial images) play important social and emotional roles in our lives. Unfortunately, this type of visual media is inaccessible to users with visual impairment. This paper proposes a systematic approach for automatically converting human facial images into a tactile form that can be printed on a tactile printer and explored by a user who is blind. We propose a deformable Bayesian Active Shape Model (BASM), which integrates anthropometric priors with shape and appearance information learned from a face dataset. We design an inference algorithm under this model for processing new face images to create an input-adaptive face sketch. The model is further enriched with input-specific details through semantic-aware processing. We report experiments evaluating the accuracy of face alignment using the proposed method, comparing against other state-of-the-art results. Furthermore, subjective evaluations of the produced tactile face images were performed by 17 participants, including six visually impaired users, confirming the effectiveness of the proposed approach in conveying vital visual information in a face image via haptics.
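The paper's full BASM formulation is not given in this abstract, but the Active Shape Model fitting it builds on can be illustrated with a minimal sketch. The snippet below is a hypothetical toy version, not the authors' algorithm: it assumes an orthonormal PCA shape basis (`modes`) around a mean landmark configuration (`mean_shape`) and iteratively estimates shape coefficients for an observed face, clamping them to plausible limits as a crude stand-in for the anthropometric prior.

```python
import numpy as np

def fit_asm(mean_shape, modes, observed, n_iter=10, limit=3.0):
    """Toy Active-Shape-Model fit: find coefficients b so that
    mean_shape + modes @ b approximates the observed landmarks.

    mean_shape : (d,) mean landmark vector
    modes      : (d, k) orthonormal PCA shape modes
    observed   : (d,) observed landmark vector
    limit      : clamp on |b|, a crude plausibility prior
    """
    b = np.zeros(modes.shape[1])
    for _ in range(n_iter):
        # Residual between observation and current model shape.
        residual = observed - (mean_shape + modes @ b)
        # Least-squares update; exact projection when modes are orthonormal.
        b = b + modes.T @ residual
        # Keep the shape within plausible limits (prior stand-in).
        b = np.clip(b, -limit, limit)
    return b
```

For example, with a single shape mode, fitting an observation generated two "standard deviations" along that mode recovers a coefficient of 2; the real method replaces the hard clamp with a learned Bayesian prior and fits to image evidence rather than clean landmarks.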
| Original language | English (US) |
| --- | --- |
| Article number | 5437233 |
| Pages (from-to) | 233-246 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Multimedia |
| Volume | 12 |
| Issue number | 4 |
| DOIs | |
| State | Published - Jun 2010 |
Keywords
- Image matching
- Image shape analysis
- Pattern recognition
- Tactile graphics
ASJC Scopus subject areas
- Signal Processing
- Media Technology
- Computer Science Applications
- Electrical and Electronic Engineering