Abstract

The ABET accreditation process calls for feedback to be an integral part of the continuous improvement of education programs. Considerable freedom is allowed in how this process is implemented and how the data is collected, quantified, and interpreted. Combined with the naturally high variability of the educational experience and the lack of unified, accepted performance metrics and outcome definitions, this results in a formidable yet quite interesting feedback problem. In this study, we present the approach taken for an EE program at a large state university to formalize, quantify, and automate, to the greatest possible extent, the data collection, action, and evaluation steps of the feedback and continuous improvement process. We follow the "two loop ABET process," where the academic unit defines its own program objectives that are continuously evaluated and possibly revised by the program constituents: faculty, students, alumni, and the local community and industry. The evaluation of how well the program objectives are met is accomplished through regular meetings and responses to questionnaires. We quantify these responses as adjustments to the target values of the program outcomes. Although this loop is inherently abstract and vague, and nontrivial effort must be spent on developing the questionnaires and their correspondence with the program outcomes, its implementation is relatively straightforward. The second, and arguably more interesting, part of the cycle is the assessment and evaluation of the program outcomes and the implementation of actions and policies that steer the outcomes in a desired direction. We approach this by creating a sampling mechanism, through standardized tests and questionnaires (rubrics), that quantifies the assessment and data collection process in a reliable manner.
The data is then used to automatically compute quantitative actions (typically expressed in instruction effort) that are implemented during classroom instruction and aim to minimize the difference between assessed outcomes and target outcomes. The difficulties in this process arise on several distinct fronts. One is the definition of quantitative and precise metrics that reflect changes in the program. A second is the design of the data collection and action definitions so as to minimize, or at least allow the resolution of, interdependencies and correlations among them. While these form an intellectually interesting modeling and feedback problem, one must also be prepared to accommodate some faculty resistance, indifference, or simply lack of time to perform such tasks. Viewing automation and consistency as keys to the success of continuous improvement, we have implemented this feedback process over the last four years, and here we present some of our experiences.
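The core computation described above is a feedback correction: instruction effort is adjusted in proportion to the gap between target and assessed outcome values. The following is a minimal illustrative sketch of that idea; the function name, data layout, and gain value are assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of the outcome-feedback step: instruction-effort
# adjustments are computed proportionally to the gap between target and
# assessed program outcomes. The gain controls how aggressively the gap
# is corrected in one assessment cycle.
def compute_effort_adjustments(assessed, targets, gain=0.5):
    """Return per-outcome instruction-effort adjustments.

    assessed, targets: dicts mapping outcome name -> score (e.g. 0-100).
    gain: fraction of the outcome gap to correct per cycle (assumed value).
    """
    return {
        outcome: gain * (targets[outcome] - assessed[outcome])
        for outcome in targets
    }

# Example: "design" is below target, so its adjustment is positive
# (increase instruction effort); "communication" is above target.
adjustments = compute_effort_adjustments(
    assessed={"design": 72.0, "communication": 85.0},
    targets={"design": 80.0, "communication": 80.0},
)
```

A proportional rule like this is only the simplest possible policy; resolving the interdependencies among outcomes that the abstract mentions would require a more elaborate (e.g. coupled or constrained) update.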

Original language: English (US)
Journal: ASEE Annual Conference and Exposition, Conference Proceedings
State: Published - 2011


ASJC Scopus subject areas

  • Engineering(all)

Cite this

@article{292bfa4136e54e558687f6b577a1bccd,
title = "On the implementation of ABET feedback for program improvement",
author = "Phillips, {Stephen M.} and Konstantinos Tsakalis and Ravi Gorur",
year = "2011",
language = "English (US)",
journal = "ASEE Annual Conference and Exposition, Conference Proceedings",
issn = "2153-5965",
}
