ROBUSTNESS OF THE NUMBER RIGHT ELIMINATION TESTING (NRET) SCORING METHOD FOR MULTIPLE-CHOICE ITEMS IN A COMPUTER-ADAPTIVE ASSESSMENT SYSTEM (CAAS)
Abstract
This paper compares the robustness of the Number Right Elimination Testing (NRET) scoring method for multiple-choice items in a Computer-Adaptive Assessment System (CAAS) with two existing scoring methods: Number Right (NR) and Elimination Testing (ET). The NRET scoring method better reflects workplace reality in that it credits partial knowledge, penalizes guessing, and detects misconceptions. A quasi-experimental research design was employed in which error due to scoring was the prime focus and the scoring method was the main manipulated variable. A total of 449 Form Two students from 19 Malaysian secondary schools participated in the study. The robustness of the NRET method was evaluated twice: once using mathematics items and once using science items. In addition, students' perceptions of NRET and CAAS were studied and discussed. The results showed that the NRET method is more efficient in estimating students' ability, with NRET scores having higher reliability and a lower Standard Error of Measurement. Furthermore, the results showed that the test length could be shortened while retaining the desired reliability if the NRET method was used. They also showed that the NRET scores were similar to the ET scores but different from the NR scores. Regarding students' perceptions, the NRET method was rated as a practical scoring method, and students indicated willingness to use CAAS if it were available.
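The abstract's claims about reliability, measurement error, and test length rest on two standard psychometric relations that are not specific to this study; they are sketched here only to make the reasoning explicit. The Standard Error of Measurement (SEM) decreases as reliability increases, and the Spearman-Brown prophecy formula links a change in test length to the resulting reliability (symbols below are the conventional ones, assumed rather than taken from the paper):

\[
\mathrm{SEM} = \sigma_X \sqrt{1 - r_{XX'}},
\qquad
r_{\text{new}} = \frac{k\, r_{XX'}}{1 + (k-1)\, r_{XX'}},
\]

where $\sigma_X$ is the standard deviation of observed scores, $r_{XX'}$ is the reliability coefficient of the original test, and $k$ is the factor by which the test is lengthened or shortened. Under these relations, a scoring method that yields higher $r_{XX'}$ (as reported for NRET) implies a lower SEM for a fixed $\sigma_X$, and permits $k < 1$ (a shorter test) while still meeting a target reliability $r_{\text{new}}$.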
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.