This Innovative Practice Full Paper presents our experience using formative data produced by an autograder tool to understand how students complete assignments in a second-year computing course. Autograding has increasingly been used in computing courses to evaluate student work. Autograders support the scalability of classes and provide other advantages such as accurate assessment, reproducibility across semesters, and rapid feedback. As a consequence of using an autograder, students produce a trace of their problem-solving process while completing assignments. In contrast to the summative assessment performed on a final submission, this trace captures the formative steps involved in developing that submission. Analyzing this trace can provide insights into student problem-solving methodology, the structure of the problems being assigned, and the impact of classroom interventions during an assignment. To produce this view into student learning, we have developed an autograder with the initial goal of improving student instruction. It uses a combination of static and dynamic analysis techniques to support accurate assessment and to check for more complex requirements than can be captured by approaches that compare program output against a ground truth. The tool has been used in a sophomore-level course on data structures and algorithms. This course uses the Java programming language and introduces topics such as lists, searching, sorting, binary search trees, priority queues, hash tables, and graphs. Our current work focuses on analyzing the traces students produce to identify difficult parts of an assignment (as reflected by grading regressions) and the paths through possible solution states that students take as they complete an assignment.
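To make the notion of a grading regression concrete, the following is a minimal sketch (not the paper's actual implementation) of how such regressions could be located in a submission trace. It assumes a hypothetical trace represented simply as an array of autograder scores, one per successive submission; a regression is any submission whose score drops below that of the previous submission.

```java
import java.util.ArrayList;
import java.util.List;

public class RegressionFinder {

    // Given a hypothetical trace of autograder scores for successive
    // submissions, return the indices of submissions whose score
    // dropped relative to the immediately preceding submission.
    static List<Integer> findRegressions(int[] scores) {
        List<Integer> regressions = new ArrayList<>();
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] < scores[i - 1]) {
                regressions.add(i);
            }
        }
        return regressions;
    }

    public static void main(String[] args) {
        // Illustrative trace: scores rise overall but dip twice,
        // marking submissions 2 and 4 as regressions.
        int[] trace = {40, 55, 50, 70, 65, 100};
        System.out.println(findRegressions(trace)); // prints [2, 4]
    }
}
```

A real analysis would work over richer per-submission records (timestamps, per-test results), but the same scan over score deltas identifies the points in an assignment where students lose previously working functionality.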
In this paper, we discuss the design of this tool, the insights the captured data provides into student learning, and ideas for future directions supported by gathering formative assessment data.