Image quality assessment is one of many applications that can benefit from computational saliency models. However, existing visual saliency models have not been extensively tested in a quality assessment context, and they are typically designed to predict saliency in non-distorted images. Recent work has also focused on mimicking the human visual system to predict fixation points from saliency maps. One such foveation-based technique (GAFFE) has been shown to perform well on non-distorted images. This work extends the foveation framework by integrating it with saliency maps from well-known saliency models. The performance of the resulting foveated saliency models is evaluated against human ground-truth eye-tracking data; for comparison, the performance of the original non-foveated saliency predictions is also reported. It is shown that integrating saliency models with a foveation-based fixation-finding framework significantly improves the prediction performance of existing saliency models across different distortion types. It is also found that information-maximization-based saliency maps consistently perform best across distortion types and levels under this foveation-based framework.