TY - GEN
T1 - Biased manifold embedding
T2 - 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
AU - Balasubramanian, Vineeth Nallure
AU - Ye, Jieping
AU - Panchanathan, Sethuraman
PY - 2007
Y1 - 2007
N2 - The estimation of head pose angle from face images is an integral component of face recognition systems, human-computer interfaces and other human-centered computing applications. To determine the head pose, face images with varying pose angles can be considered to be lying on a smooth low-dimensional manifold in high-dimensional feature space. While manifold learning techniques capture the geometrical relationship between data points in the high-dimensional image feature space, the pose label information of the training data samples is neglected in the computation of these embeddings. In this paper, we propose a novel supervised approach to manifold-based non-linear dimensionality reduction for head pose estimation. The Biased Manifold Embedding (BME) framework is pivoted on the ideology of using the pose angle information of the face images to compute a biased neighborhood of each point in the feature space, before determining the low-dimensional embedding. The proposed BME approach is formulated as an extensible framework, and validated with the Isomap, Locally Linear Embedding (LLE) and Laplacian Eigenmaps techniques. A Generalized Regression Neural Network (GRNN) is used to learn the non-linear mapping, and linear multi-variate regression is finally applied on the low-dimensional space to obtain the pose angle. We tested this approach on face images of 24 individuals with pose angles varying from -90° to +90° with a granularity of 2°. The results showed substantial reduction in the error of pose angle estimation, and robustness to variations in feature spaces, dimensionality of embedding and other parameters.
AB - The estimation of head pose angle from face images is an integral component of face recognition systems, human-computer interfaces and other human-centered computing applications. To determine the head pose, face images with varying pose angles can be considered to be lying on a smooth low-dimensional manifold in high-dimensional feature space. While manifold learning techniques capture the geometrical relationship between data points in the high-dimensional image feature space, the pose label information of the training data samples is neglected in the computation of these embeddings. In this paper, we propose a novel supervised approach to manifold-based non-linear dimensionality reduction for head pose estimation. The Biased Manifold Embedding (BME) framework is pivoted on the ideology of using the pose angle information of the face images to compute a biased neighborhood of each point in the feature space, before determining the low-dimensional embedding. The proposed BME approach is formulated as an extensible framework, and validated with the Isomap, Locally Linear Embedding (LLE) and Laplacian Eigenmaps techniques. A Generalized Regression Neural Network (GRNN) is used to learn the non-linear mapping, and linear multi-variate regression is finally applied on the low-dimensional space to obtain the pose angle. We tested this approach on face images of 24 individuals with pose angles varying from -90° to +90° with a granularity of 2°. The results showed substantial reduction in the error of pose angle estimation, and robustness to variations in feature spaces, dimensionality of embedding and other parameters.
UR - http://www.scopus.com/inward/record.url?scp=35148892730&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=35148892730&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2007.383280
DO - 10.1109/CVPR.2007.383280
M3 - Conference contribution
AN - SCOPUS:35148892730
SN - 1424411807
SN - 9781424411801
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
BT - 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
Y2 - 17 June 2007 through 22 June 2007
ER -