Evaluation of automatically generated English vocabulary questions


Yuni Susanti
Takenobu Tokunaga
Hitoshi Nishikawa
Hiroyuki Obari


This paper describes the evaluation experiments for questions created by an automatic question generation system. Given a target word and one of its word senses, the system generates a multiple-choice English vocabulary question asking for the word closest in meaning to the target word as it is used in a reading passage. The questions were evaluated along two dimensions: (1) their ability to measure English learners’ proficiency and (2) their similarity to human-made questions. The first evaluation is based on learners’ responses collected by administering both the machine-generated and the human-made questions to them; the second is based on subjective judgements by English teachers. Both evaluations showed that the machine-generated questions reached a level comparable to the human-made questions, both in measuring English proficiency and in their similarity to human-made questions.




How to Cite
Susanti, Y., Tokunaga, T., Nishikawa, H., & Obari, H. (2017). Evaluation of automatically generated English vocabulary questions. Research and Practice in Technology Enhanced Learning, 12. Retrieved from https://rptel.apsce.net/index.php/RPTEL/article/view/2017-12011