This study expands an existing model of students' reading comprehension ability within an intelligent tutoring system (ITS). The current system evaluates students' natural language input using a local student model. We examine the potential to extend this model by assessing linguistic features of self-explanations aggregated across entire passages. Specifically, we assessed the relationship between 126 students' reading comprehension ability and three cohesion indices computed over their aggregated self-explanations. Results indicated that the three cohesion indices accounted for variance in reading ability over and above the features used in the current algorithm. These results demonstrate that broadening the window of NLP analyses can strengthen student models within ITSs.
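The incremental-variance claim can be illustrated with a minimal sketch of nested ordinary least squares regression, comparing a baseline model against one that adds a passage-level cohesion predictor. All variable names and the simulated data here are hypothetical stand-ins, not the study's actual features or results; the point is only the ΔR² comparison between nested models.

```python
import numpy as np

def r_squared(X, y):
    # Fit OLS with an intercept and return the coefficient of determination.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 126  # matches the study's sample size; the data below are simulated
local = rng.normal(size=n)        # stand-in for the local student-model features
cohesion = rng.normal(size=n)     # stand-in for a passage-level cohesion index
ability = 0.5 * local + 0.3 * cohesion + rng.normal(scale=0.5, size=n)

r2_base = r_squared(local.reshape(-1, 1), ability)
r2_full = r_squared(np.column_stack([local, cohesion]), ability)
delta_r2 = r2_full - r2_base  # variance explained over and above the baseline
```

A positive `delta_r2` in this setup mirrors the abstract's finding that cohesion indices account for variance beyond the features already in the algorithm; in practice one would test that increment with a hierarchical regression F-test rather than inspect ΔR² alone.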