Publication
EDM 2018
Conference paper

Using a common sense knowledge base to auto generate multi-dimensional vocabulary assessments

Abstract

As education becomes increasingly digitized and intelligent tutoring systems gain commercial prominence, scalable assessment generation becomes a critical requirement for improving learning outcomes. Assessments provide a way to measure learners’ level of understanding and the difficulties they face, and to personalize their learning. Prior efforts have tackled separate parts of this problem in different fields. This paper is a first effort to bring together techniques from diverse areas such as knowledge representation and reasoning, machine learning, inference on graphs, and pedagogy to generate assessments automatically and at scale. Specifically, we address the problem of Multiple Choice Question (MCQ) generation for vocabulary learning assessments catered to young learners (YL). We evaluate the efficacy of our approach by asking human annotators to rate the relevance of the questions generated by the system. We also compare our approach against a baseline model and report substantially higher usability for the MCQs generated by our system than for those of the baseline.
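The full generation pipeline is described in the paper itself; purely as an illustration of the general idea, the sketch below builds a vocabulary MCQ from a hypothetical in-memory common sense knowledge graph (toy triples, not ConceptNet or the authors' actual system): the target word's IsA category forms the question stem, and concepts the graph places in other categories serve as distractors.

```python
import random

# Hypothetical miniature common sense knowledge base: (subject, relation, object) triples.
TRIPLES = [
    ("apple", "IsA", "fruit"),
    ("banana", "IsA", "fruit"),
    ("grape", "IsA", "fruit"),
    ("carrot", "IsA", "vegetable"),
    ("potato", "IsA", "vegetable"),
    ("spinach", "IsA", "vegetable"),
]

def make_mcq(word, n_distractors=3):
    """Build a 'which of these is a <category>?' item with `word` as the answer."""
    categories = [o for s, r, o in TRIPLES if s == word and r == "IsA"]
    if not categories:
        return None  # the KB has no IsA edge for this word
    category = categories[0]
    # Distractors: concepts the KB places in a different category than `word`.
    wrong = sorted({s for s, r, o in TRIPLES if r == "IsA" and o != category})
    options = random.sample(wrong, min(n_distractors, len(wrong))) + [word]
    random.shuffle(options)
    return {"stem": f"Which of these is a {category}?",
            "options": options,
            "answer": word}

print(make_mcq("apple"))
```

In this toy setup, distractor quality depends entirely on the breadth and accuracy of the knowledge graph; the paper's multi-dimensional approach goes beyond this single-relation sketch.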

Date

15 Jul 2018
