Abstract

The ABET accreditation process calls for feedback to be an integral part of the continuous improvement of education programs. Considerable freedom is allowed in how this process is implemented and in how the data is collected, quantified, and interpreted. Combined with the naturally high variability of the educational experience and the lack of unified, accepted performance metrics and outcome definitions, this results in a formidable yet quite interesting feedback problem. In this study, we present the approach taken for an EE program at a large state university to formalize, quantify, and, to the greatest possible extent, automate the data collection, action, and evaluation steps of the feedback and continuous improvement process. We follow the "two loop ABET process," where the academic unit defines its own program objectives, which are continuously evaluated and possibly revised by the program constituents: faculty, students, alumni, and the local community and industry. The evaluation of how well the program objectives are met is accomplished through regular meetings and responses to questionnaires. We quantify these responses and translate them into adjustments of the target values of the program outcomes. Although this loop is naturally abstract and vague, and nontrivial effort must be spent developing the questionnaires and their correspondence with the program outcomes, its implementation is relatively straightforward. The second, and arguably more interesting, part of the cycle is the assessment and evaluation of the program outcomes, and the implementation of actions and policies that steer the outcomes in a desired direction. We approach this by creating a sampling mechanism based on standardized tests and questionnaires (rubrics) to quantify the assessment and data collection process in a reliable manner. The data is then used to automatically compute quantitative actions (typically expressed in instruction effort) that are implemented during classroom instruction and aim to minimize the difference between assessed outcomes and target outcomes. The difficulties in this process lie in several distinct planes. One is the definition of precise, quantitative metrics that reflect changes in the program. Another is the design of data collection procedures and action definitions that minimize, or at least allow the resolution of, interdependencies and correlations among them. While these form an intellectually interesting modeling and feedback problem, one must also be prepared to accommodate some faculty resistance, indifference, or simply a lack of time to perform such tasks. Viewing automation and consistency as keys to the success of continuous improvement, we have implemented this feedback process over the last four years, and here we present some of our experiences.
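As a rough illustration of the outcome-feedback step described above, a proportional correction can convert the gap between assessed and target outcome values into an instruction-effort adjustment. The following minimal Python sketch is an assumption-laden illustration, not the paper's actual implementation; the gain, scale, and all names are hypothetical.

```python
# Hypothetical sketch of the outcome-feedback computation: assessed outcome
# levels are compared against targets, and each gap is converted into an
# instruction-effort adjustment. Gain, scores, and names are illustrative
# assumptions, not the implementation described in the paper.

ASSESSED = {"a": 3.1, "b": 2.4, "c": 2.9}   # mean rubric scores per outcome (assumed 1-4 scale)
TARGETS  = {"a": 3.0, "b": 3.0, "c": 3.0}   # target values derived from program objectives
GAIN = 0.5                                   # proportional gain: how aggressively to react

def effort_adjustments(assessed, targets, gain=GAIN):
    """Return per-outcome instruction-effort changes that push assessed
    values toward targets (positive = devote more classroom effort)."""
    return {k: gain * (targets[k] - assessed[k]) for k in targets}

if __name__ == "__main__":
    for outcome, delta in effort_adjustments(ASSESSED, TARGETS).items():
        print(f"outcome {outcome}: effort change {delta:+.2f}")
```

In this toy run, outcome "b" falls well below target and receives the largest positive effort adjustment, mirroring the abstract's goal of minimizing the difference between assessed and target outcomes.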

Original language: English (US)
Journal: ASEE Annual Conference and Exposition, Conference Proceedings
State: Published - 2011

ASJC Scopus subject areas

  • General Engineering
