This project explores the integration of fine-grained computer-collected (FGCC) data with other data streams of varying sources, modalities, and time scales in order to build a complete picture of the learner. We will develop methods for integrating these different forms of data and explore their differential contributions. We propose to conduct our investigations in the context of classroom enactments of two computer-supported learning environments, ChemVLab and the Carnegie Mellon Algebra Tutor. Our studies will use log-file data of the sort that has been studied previously, but we will augment this FGCC data with data from a range of other modalities to build a more complete picture of individual students as well as of entire classes.

The proposed project has two main aims:

1. Map the limits of FGCC data. We will answer the question: what does FGCC data capture, and what does it miss? First, we will examine the ability of FGCC data to predict a wide variety of learning outcomes captured by a range of measures. Second, using data from the multiple modalities, we will build a set of learning narratives for a subset of the students observed, and we will compare these narratives to the pictures provided by the FGCC data alone.

2. Integrate FGCC data with data from other modalities. We will explore combining FGCC data with data from other modalities in a manner that makes it possible to fluidly mine the full corpus of data. We will then map the limits of automated analysis of this enriched multi-modal data, just as we did for the FGCC data alone, asking: Can we better predict outcomes? Can we capture more of the full learning narrative?
Effective start/end date: 9/1/14 → 8/31/17
- National Science Foundation (NSF): $175,893.00