An integral component of face processing research is the estimation of head orientation from face images. Head pose estimation is important in several applications: biometrics, human-computer interfaces, driver monitoring systems, video conferencing, and social interaction enhancement programs. A recent trend in head pose estimation research is the use of manifold learning techniques to capture the underlying geometry of the images. Face images with varying pose angles can be viewed as lying on a smooth low-dimensional manifold in a high-dimensional image feature space. With real-world images, however, manifold learning techniques often fail because they rely on a geometric structure that is distorted by noise, illumination changes, and other variations. They also tend to give inaccurate results when face images of multiple individuals with varying pose angles are present. In this work, we introduce a novel framework for supervised manifold learning, called Biased Manifold Embedding, to improve person-independent head pose estimation. Although the framework goes beyond pose estimation and can be applied to any regression problem, this work focuses on formulating the framework and validating its performance using the Isomap technique for head pose estimation. The work was carried out on face images from the FacePix database, which contains 181 face images of each of 30 individuals, with pose angles varying at a granularity of 1°. A Generalized Regression Neural Network (GRNN) was used to learn the non-linear mapping into the low-dimensional space, and linear multivariate regression was applied on that space to obtain the pose angle. The results show that the approach holds promise, with estimation errors substantially lower than those of earlier efforts that used manifold learning techniques for head pose estimation.
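To make the pipeline concrete, the sketch below illustrates the general idea of manifold-based pose estimation: embed pose-varying feature vectors with Isomap and regress the pose angle from the embedding. This is a minimal illustration under stated assumptions, not the paper's Biased Manifold Embedding or GRNN mapping: it uses plain unsupervised Isomap, substitutes ordinary linear regression for the GRNN out-of-sample step, and replaces FacePix images with synthetic 50-dimensional feature vectors generated from a smooth 1-D curve parameterized by pose angle.

```python
# Hedged sketch of manifold-based pose estimation (NOT the paper's
# Biased Manifold Embedding): Isomap embedding + linear regression,
# on synthetic stand-in data rather than real FacePix images.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for face features: 181 pose angles from -90° to +90°
# (one per degree, mirroring FacePix's granularity) mapped onto a smooth
# curve in a 50-dimensional feature space, with a little noise.
angles = np.linspace(-90.0, 90.0, 181)
t = np.deg2rad(angles)
curve = np.stack([np.sin(t), np.cos(t), t], axis=1)       # smooth 1-D manifold
projection = rng.normal(size=(3, 50))                      # lift to 50-D
X = curve @ projection + 0.01 * rng.normal(size=(181, 50))

# Unsupervised manifold embedding to a low-dimensional space.
embedding = Isomap(n_neighbors=8, n_components=2)
Z = embedding.fit_transform(X)

# Multivariate linear regression from the embedding to the pose angle
# (the paper instead learns the image-to-embedding map with a GRNN).
reg = LinearRegression().fit(Z, angles)
predicted = reg.predict(Z)
mae = np.abs(predicted - angles).mean()
print(f"mean absolute pose error: {mae:.2f} degrees")
```

On this idealized single-manifold data the fit is close; the paper's point is precisely that real multi-person images break this clean geometry, motivating the supervised bias in the embedding.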