This paper describes two affect-sensitive variants of an existing intelligent tutoring system, AutoTutor. The new versions detect learners' boredom, confusion, and frustration by monitoring conversational cues, gross body language, and facial features. The sensed cognitive-affective states are used to select AutoTutor's pedagogical and motivational dialogue moves and to drive the behavior of an embodied pedagogical agent that expresses emotions through verbal content, facial expressions, and affective speech. The first version, the Supportive AutoTutor, responds to the negative states with empathetic and encouraging feedback, attributing the source of the learners' emotions to the material or to itself, but never directly to the learner. In contrast, the second version, the Shakeup AutoTutor, takes students to task by attributing the emotions directly to the learners themselves and replying with witty, skeptical, and enthusiastic remarks. This paper provides an overview of our theoretical framework and the design of the Supportive and Shakeup tutors.
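To make the contrast between the two variants concrete, the core selection logic can be sketched as a lookup from a sensed state to a dialogue move. This is an illustrative sketch only, not the system's published implementation: the state names follow the abstract, but the response templates and the `select_move` function are hypothetical.

```python
# Hypothetical sketch of affect-sensitive dialogue-move selection.
# State names (boredom, confusion, frustration) come from the paper;
# all templates and identifiers below are illustrative assumptions.

# Supportive variant: attributes the emotion to the material or the
# tutor itself, never directly to the learner.
SUPPORTIVE = {
    "boredom": "This material can be dry; let's try a livelier problem.",
    "confusion": "Some of these ideas are tricky; maybe I explained that poorly.",
    "frustration": "This topic is tough; the material moves quickly here.",
}

# Shakeup variant: attributes the emotion directly to the learner,
# with a witty, skeptical, enthusiastic tone.
SHAKEUP = {
    "boredom": "You look bored! Bet you can't solve the next one.",
    "confusion": "You seem confused; are you sure you read the problem carefully?",
    "frustration": "You're frustrated? Channel it and prove this problem wrong.",
}

def select_move(state: str, variant: str) -> str:
    """Pick a motivational dialogue move for a sensed cognitive-affective state."""
    table = SUPPORTIVE if variant == "supportive" else SHAKEUP
    # Fall back to a neutral prompt for states neither table covers.
    return table.get(state, "Let's keep going.")

print(select_move("confusion", "supportive"))
```

In a real system the sensed state would arrive from the multichannel detectors (dialogue, posture, face) rather than as a string, and move selection would also condition on dialogue history; the table merely illustrates the attribution difference between the two tutors.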