Detecting students-at-risk in computer programming classes with learning analytics from students’ digital footprints

David Azcona, I-Han Hsiao, Alan F. Smeaton

Research output: Contribution to journal › Article

Abstract

Different kinds of data about students, ranging from static demographics to dynamic behaviour logs, can be harnessed from a variety of sources at Higher Education Institutions. Combining these assembles a rich digital footprint for each student, which can enable institutions to better understand student behaviour and to better prepare for guiding students towards reaching their academic potential. This paper presents a new research methodology to automatically detect students “at-risk” of failing an assignment in computer programming modules (courses) and to simultaneously support adaptive feedback. By leveraging historical student data, we built predictive models using students’ offline (static) information, including student characteristics and demographics, and online (dynamic) information drawn from programming and behaviour activity logs. Predictions are generated weekly during the semester. Overall, the predictive and personalised feedback helped to reduce the gap between lower- and higher-performing students. Furthermore, students praised the predictions and the personalised feedback, conveying strong recommendations for future students to use the system. We also found that students who followed their personalised guidance and recommendations performed better in examinations.
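
To illustrate the kind of pipeline the abstract describes, below is a minimal, hypothetical sketch of a weekly at-risk classifier that combines static student attributes with per-week activity aggregates. The feature names, the toy data and the choice of a scikit-learn logistic regression are illustrative assumptions, not the authors' actual models or dataset.

# Hypothetical sketch of a weekly "at-risk" classifier combining static
# (demographic) and dynamic (weekly activity) features. All names, data
# and the model choice are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy training data from a previous cohort: static attributes plus counts
# of programming/behaviour events aggregated up to the current week.
history = pd.DataFrame({
    "prior_cs_experience": ["yes", "no", "no", "yes"],
    "programme":           ["CS",  "CS", "EE", "EE"],
    "submissions_wk":      [12, 2, 5, 9],     # lab submissions so far
    "passed_tests_wk":     [30, 3, 10, 22],   # unit tests passed so far
    "platform_logins_wk":  [8, 1, 4, 7],      # logins to course platform
    "failed_assignment":   [0, 1, 1, 0],      # label: 1 = at risk
})

features = ["prior_cs_experience", "programme",
            "submissions_wk", "passed_tests_wk", "platform_logins_wk"]

# Encode static categorical attributes; standardise dynamic activity counts.
preprocess = ColumnTransformer([
    ("static", OneHotEncoder(handle_unknown="ignore"),
     ["prior_cs_experience", "programme"]),
    ("dynamic", StandardScaler(),
     ["submissions_wk", "passed_tests_wk", "platform_logins_wk"]),
])

model = Pipeline([
    ("prep", preprocess),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(history[features], history["failed_assignment"])

# Each week, score the current cohort and flag students whose predicted
# probability of failing the next assignment exceeds a chosen threshold.
current = history[features].head(2)   # stand-in for this week's cohort data
risk = model.predict_proba(current)[:, 1]
flagged = np.where(risk > 0.5)[0]
print("At-risk indices this week:", flagged.tolist())

In the setting the paper describes, such a model would be retrained on historical cohorts, re-scored each week as new activity logs arrive, and the flagging threshold tuned on held-out data before any feedback is sent to students.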

Original language: English (US)
Journal: User Modeling and User-Adapted Interaction
DOI: 10.1007/s11257-019-09234-7
State: Published - Jan 1 2019
Externally published: Yes

Keywords

  • Computer Science Education
  • Educational data mining
  • Learning analytics
  • Machine learning
  • Peer learning
  • Predictive modelling

ASJC Scopus subject areas

  • Education
  • Human-Computer Interaction
  • Computer Science Applications

Cite this

@article{8538123488db4aa0be9581a040551353,
title = "Detecting students-at-risk in computer programming classes with learning analytics from students’ digital footprints",
abstract = "Different kinds of data about students, ranging from static demographics to dynamic behaviour logs, can be harnessed from a variety of sources at Higher Education Institutions. Combining these assembles a rich digital footprint for each student, which can enable institutions to better understand student behaviour and to better prepare for guiding students towards reaching their academic potential. This paper presents a new research methodology to automatically detect students “at-risk” of failing an assignment in computer programming modules (courses) and to simultaneously support adaptive feedback. By leveraging historical student data, we built predictive models using students’ offline (static) information, including student characteristics and demographics, and online (dynamic) information drawn from programming and behaviour activity logs. Predictions are generated weekly during the semester. Overall, the predictive and personalised feedback helped to reduce the gap between lower- and higher-performing students. Furthermore, students praised the predictions and the personalised feedback, conveying strong recommendations for future students to use the system. We also found that students who followed their personalised guidance and recommendations performed better in examinations.",
keywords = "Computer Science Education, Educational data mining, Learning analytics, Machine learning, Peer learning, Predictive modelling",
author = "Azcona, {David} and Hsiao, {I-Han} and Smeaton, {Alan F.}",
year = "2019",
month = "1",
day = "1",
doi = "10.1007/s11257-019-09234-7",
language = "English (US)",
journal = "User Modeling and User-Adapted Interaction",
issn = "0924-1868",
publisher = "Springer Netherlands",

}
