Can automated item generation be used to develop high quality MCQs that assess application of knowledge?

Debra Pugh
André De Champlain
Mark Gierl
Hollis Lai
Claire Touchie

Abstract

The purpose of this study was to compare the quality of multiple-choice questions (MCQs) developed using automated item generation (AIG) with that of MCQs developed using traditional methods. In a blinded study, a panel of content experts rated a total of 102 MCQs on six quality metrics and judged whether each item tested recall or application of knowledge. A Wilcoxon two-sample test evaluated differences between the two development methods on each of the six quality metric rating scales as well as on the overall cognitive domain judgment. No significant differences were found in item quality or cognitive domain assessed between the two development methods. The vast majority of items (> 90%) developed using either method were judged to assess higher-order skills. MCQs developed using AIG were thus comparable in quality to traditionally developed items, and both modalities can produce items that assess higher-order cognitive skills.
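
The comparison described in the abstract amounts to a Wilcoxon two-sample (rank-sum) test run per rating scale. As a minimal sketch of that procedure, the Python snippet below uses scipy.stats.ranksums with hypothetical rating vectors; the values and variable names are illustrative placeholders, not the study's data.

# Minimal sketch of the per-metric comparison; the rating vectors are
# hypothetical placeholders, not the study's data.
from scipy.stats import ranksums

# Hypothetical expert ratings on one quality metric (e.g., a 1-5 scale),
# split by item-development method.
aig_ratings = [3, 4, 3, 2, 4, 3, 3, 4]
traditional_ratings = [3, 3, 4, 3, 2, 4, 3, 3]

# Two-sided rank-sum test of whether the two rating distributions differ.
stat, p_value = ranksums(aig_ratings, traditional_ratings)
print(f"Wilcoxon rank-sum statistic = {stat:.3f}, p = {p_value:.3f}")

In the study's design, a test of this kind would be run separately for each of the six quality metrics and for the cognitive domain judgment.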

Article Details

How to Cite
Pugh, D., De Champlain, A., Gierl, M., Lai, H., & Touchie, C. (2020). Can automated item generation be used to develop high quality MCQs that assess application of knowledge? Research and Practice in Technology Enhanced Learning, 15. Retrieved from https://rptel.apsce.net/index.php/RPTEL/article/view/2020-15012
Section
Articles