Images are in general captured under diverse conditions: the same object may appear with varied poses, illuminations, scales, backgrounds, and possibly different camera parameters. The task of image classification therefore lies in mapping input images into a representational space in which classifiers remain effective despite these variations. Existing methods have mostly focused on obtaining features that are invariant to scale and translation, and thus generally suffer performance degradation on datasets containing images with varied poses or camera orientations. In this paper we present a new framework for image classification, built upon a novel feature-extraction scheme that generates largely affine-invariant features called affine sparse codes. This is achieved by learning a compact dictionary of features from affine-transformed input images. Analysis and experiments indicate that these features are highly discriminative in addition to being largely affine-invariant. An AdaBoost classifier is then designed using the affine sparse codes as input. Extensive experiments on standard databases demonstrate that the proposed approach achieves state-of-the-art results, outperforming existing leading approaches in the literature.
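The pipeline outlined above (affine augmentation of inputs, dictionary learning, then sparse coding of patches) could be prototyped roughly as follows. This is a minimal illustrative sketch, not the paper's actual method: the 90-degree rotations and flips stand in for general affine transforms, the dictionary update is a spherical-k-means simplification of standard dictionary learning, and the greedy coder (`sparse_code`) is a stripped-down matching-pursuit variant; all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_augment(patch):
    """Crude affine augmentation: 90-degree rotations and flips
    (a stand-in for general affine transforms of the input image)."""
    views = []
    for k in range(4):
        r = np.rot90(patch, k)
        views.append(r)
        views.append(np.fliplr(r))
    return views

def learn_dictionary(patches, n_atoms, n_iter=10):
    """Toy dictionary learning: alternate 1-sparse assignment and
    atom re-estimation (spherical k-means), a simplification of
    standard K-SVD-style dictionary learning."""
    X = np.stack([p.ravel() for p in patches])
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    D = X[rng.choice(len(X), n_atoms, replace=False)].copy()
    for _ in range(n_iter):
        assign = np.argmax(X @ D.T, axis=1)  # best-matching atom per patch
        for j in range(n_atoms):
            members = X[assign == j]
            if len(members):
                atom = members.sum(axis=0)
                D[j] = atom / (np.linalg.norm(atom) + 1e-8)
    return D

def sparse_code(x, D, k=3):
    """Greedy k-sparse coding: pick the k atoms most correlated with x,
    then solve least squares on that support."""
    x = x.ravel()
    support = np.argsort(-np.abs(D @ x))[:k]
    coef, *_ = np.linalg.lstsq(D[support].T, x, rcond=None)
    code = np.zeros(len(D))
    code[support] = coef
    return code

# Demo on random 8x8 patches: augment, learn a 32-atom dictionary,
# and encode one patch with at most 3 non-zero coefficients.
base = [rng.standard_normal((8, 8)) for _ in range(40)]
patches = [v for p in base for v in affine_augment(p)]
D = learn_dictionary(patches, n_atoms=32)
code = sparse_code(patches[0], D, k=3)
print(int(np.count_nonzero(code)))
```

Because every augmented view of a patch is present in the training set, the learned atoms tend to cover the transformed appearances as well, which is the intuition behind the affine-invariance claim.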