This paper presents a novel method for static gesture recognition based on visual attention. The proposed method uses a visual attention model to automatically select points that correspond to fixation points of the human eye, and gesture recognition is then performed on the resulting fixation points. For this purpose, shape context descriptors are used to compare the sparse sets of fixation points of gestures for classification. Simulation results illustrate the performance of the proposed perception-based attentive gesture recognition method. The method not only supports the development of more natural, user-centric interactive interfaces but also achieves a classification accuracy of 96.42% on the Triesch database of hand postures, surpassing other methods reported in the literature.
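To make the matching step concrete, the following is a minimal sketch (not the paper's implementation) of the standard shape context descriptor of Belongie et al., which the abstract names as the comparison mechanism: each sparse point (here standing in for a fixation point) is described by a log-polar histogram of the relative positions of all other points, and two descriptors are compared with a chi-squared cost. Bin counts and radial limits are illustrative assumptions.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Compute a shape context descriptor for each 2-D point:
    a log-polar histogram of the positions of the other points
    relative to it (after Belongie et al.)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]   # pairwise offsets
    dist = np.linalg.norm(diff, axis=2)
    angle = np.arctan2(diff[..., 1], diff[..., 0])   # in [-pi, pi)
    # Normalise distances by the mean pairwise distance for scale invariance.
    mean_d = dist[dist > 0].mean()
    r = dist / mean_d
    # Log-spaced radial bin edges (illustrative limits) and uniform angle bins.
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    descriptors = np.zeros((n, n_r, n_theta))
    for i in range(n):
        for j in range(n):
            if i == j or r[i, j] >= r_edges[-1]:
                continue  # skip self and points beyond the outermost bin
            rb = max(np.searchsorted(r_edges, r[i, j], side="right") - 1, 0)
            tb = int((angle[i, j] + np.pi) / (2 * np.pi) * n_theta) % n_theta
            descriptors[i, rb, tb] += 1
    return descriptors.reshape(n, -1)

def chi2_cost(h1, h2):
    """Chi-squared distance between two shape-context histograms."""
    denom = h1 + h2
    denom[denom == 0] = 1.0  # avoid division by zero in empty bins
    return 0.5 * np.sum((h1 - h2) ** 2 / denom)
```

In a full classifier, the per-point chi-squared costs between a query gesture's fixation points and each template's points would be aggregated (e.g. via an optimal point assignment) and the query assigned to the nearest template class.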