What’s more important when developing math recommender systems: accuracy, explainability, or both?
Abstract
To make accurate predictions, complex artificial intelligence techniques are being adopted in intelligent systems. This creates a need for explanations that help users understand how a model works. Beyond this original purpose, explanations in educational intelligent systems have been found to increase students’ awareness, perceived usefulness, and acceptance of the recommendations. Can a model make accurate predictions and deliver the benefits of explanations at the same time? Although complex models are commonly considered accurate but difficult to interpret, it remains debatable whether there is a trade-off between the accuracy and explainability of such models. In this study, we explore the relationship between the accuracy and explainability of different models for recommending math quizzes in the context of formative assessment. Focusing on three recommender models, namely an inherently explainable model (Naïve CE), a black-box model (MF), and an integrated model (CE+MF), we compared their accuracy on a large-scale real-world dataset and evaluated their explanations in a semi-interactive questionnaire survey. We found that: 1) There was a trade-off between accuracy and explainability in this specific context. 2) Explainability did not show consistent trends across its different aspects. In particular, perceived understandability did not indicate perceived usefulness for math learning or behavioral intention to use the system. 3) The integrated model displayed a balanced level of accuracy and explainability, which implies that an explainable educational recommender system can feasibly be developed by improving the accuracy of an inherently explainable model.
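
As a rough illustration of how the two model families might be combined, the sketch below blends a matrix-factorization (MF) prediction with a simple interpretable score when ranking quizzes for one student. This is a minimal, assumption-laden sketch rather than the authors’ implementation: the variable names, the placeholder scores, and the blending weight alpha are all hypothetical.

    # Minimal sketch (not the authors' implementation): blending a black-box
    # matrix-factorization (MF) score with a simple, inherently explainable
    # score, loosely in the spirit of an integrated CE+MF recommender.
    # All names, placeholder data, and the weight `alpha` are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_students, n_quizzes, k = 100, 50, 8

    # MF part: latent factors assumed to be learned elsewhere (random here).
    P = rng.normal(size=(n_students, k))   # student latent factors
    Q = rng.normal(size=(n_quizzes, k))    # quiz latent factors

    # Explainable part: e.g., a per-quiz score derived from interpretable
    # statistics such as past correctness on the quiz's concepts.
    explainable_score = rng.uniform(size=(n_students, n_quizzes))

    def recommend(student, alpha=0.5, top_n=5):
        """Rank quizzes for one student by a weighted blend of both scores."""
        mf_score = P[student] @ Q.T                       # black-box prediction
        blended = alpha * mf_score + (1 - alpha) * explainable_score[student]
        return np.argsort(-blended)[:top_n]               # indices of top quizzes

    print(recommend(student=0))

In such a blend, the interpretable component can supply the explanation shown to students, while the MF component contributes predictive accuracy; the trade-off between the two is controlled by the (hypothetical) weight alpha.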
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.