Spectral-subtraction algorithms suffer from musical-noise effects caused by the large gaps that the subtractive process leaves in the frequency spectrum. Previously proposed remedies incorporate an auditory-masking model into the Wiener filter. Since the auditory-masking threshold (AMT) curve indicates that only spectral components above it are perceptible, the AMT can serve as a lower bound on the short-term speech spectrum estimate. We propose an improved Wiener filter estimate that uses perceptual constraints derived from the auditory-masking curve. Using an LPC model together with psychoacoustic principles, we derive an estimate of the speech spectral density that lowers the energy of the musical noise and spreads it across the other frequencies in the critical band. Objective and subjective evaluations indicate a slightly improved performance over ordinary spectral subtraction and Wiener filtering methods.
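The constraint described above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual algorithm: it applies a plain subtractive estimate of the speech power spectrum, clamps it from below at an assumed masking-threshold value per frequency bin (a real system would compute the AMT from a psychoacoustic model), and forms a Wiener gain from the constrained estimate. The function name and the flat toy threshold are hypothetical.

```python
import numpy as np

def perceptual_wiener_gain(noisy_psd, noise_psd, masking_threshold):
    """Wiener gain with a perceptual lower bound on the speech PSD estimate.

    Plain spectral subtraction can drive the speech estimate to zero in
    isolated bins, producing musical noise; clamping the estimate at the
    masking threshold keeps the residual spectrally smoother.
    """
    # Subtractive estimate of the speech power spectrum (may go negative).
    speech_psd = noisy_psd - noise_psd
    # Perceptual constraint: never let the estimate fall below the AMT.
    speech_psd = np.maximum(speech_psd, masking_threshold)
    # Wiener gain computed from the constrained estimate.
    return speech_psd / (speech_psd + noise_psd)

# Toy example: unit noise floor and an assumed flat masking threshold.
noisy = np.array([2.0, 0.9, 5.0, 1.1])   # noisy-speech PSD per bin
noise = np.ones(4)                       # noise PSD estimate
amt = np.full(4, 0.3)                    # hypothetical AMT values
gain = perceptual_wiener_gain(noisy, noise, amt)
```

In the second bin the subtractive estimate would be negative (a "gap" that ordinarily yields musical noise); the clamp replaces it with the threshold value, so the gain stays small but nonzero instead of fluctuating to zero.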