Conference Paper: Does Multiple Choice Have a Future in the Age of Generative AI? A Posttest-only RCT

Title: Does Multiple Choice Have a Future in the Age of Generative AI? A Posttest-only RCT
Authors: Thomas, R. Danielle; Borchers, Conrad; Kakarla, Sanjit; Lin, Jionghao; Bhushan, Shambhavi; Guo, Boyuan; Gatz, Erin; Koedinger, R. Kenneth
Keywords: AI-assisted tutoring; Assessment; Generative AI; Human-AI tutoring; Tutoring
Issue Date: 3-Mar-2025
Abstract

The role of multiple-choice questions (MCQs) as effective learning tools has been debated in past research. While MCQs are widely used due to their ease of grading, open-response questions are increasingly used for instruction, given advances in large language models (LLMs) for automated grading. This study evaluates the effectiveness of MCQs relative to open-response questions, both individually and in combination, on learning. These activities are embedded within six tutor lessons on advocacy. Using a posttest-only randomized controlled design, we compare the performance of 234 tutors (790 lesson completions) across three conditions: MCQ only, open response only, and a combination of both. We find no significant learning differences across conditions at posttest, but tutors in the MCQ condition took significantly less time to complete instruction. These findings suggest that MCQs are as effective as, and more efficient than, open-response tasks for learning when practice time is limited. To further enhance efficiency, we autograded open responses using GPT-4o and GPT-4-turbo. GPT models demonstrate sufficient proficiency for low-stakes assessment, though further research is needed for broader use. This study contributes a dataset of lesson log data, human annotation rubrics, and LLM prompts to promote transparency and reproducibility.


Persistent Identifier: http://hdl.handle.net/10722/358757

 

DC Field | Value | Language
dc.contributor.author | Thomas, R. Danielle | -
dc.contributor.author | Borchers, Conrad | -
dc.contributor.author | Kakarla, Sanjit | -
dc.contributor.author | Lin, Jionghao | -
dc.contributor.author | Bhushan, Shambhavi | -
dc.contributor.author | Guo, Boyuan | -
dc.contributor.author | Gatz, Erin | -
dc.contributor.author | Koedinger, R. Kenneth | -
dc.date.accessioned | 2025-08-13T07:47:50Z | -
dc.date.available | 2025-08-13T07:47:50Z | -
dc.date.issued | 2025-03-03 | -
dc.identifier.uri | http://hdl.handle.net/10722/358757 | -
dc.description.abstract | <p>The role of multiple-choice questions (MCQs) as effective learning tools has been debated in past research. While MCQs are widely used due to their ease of grading, open-response questions are increasingly used for instruction, given advances in large language models (LLMs) for automated grading. This study evaluates the effectiveness of MCQs relative to open-response questions, both individually and in combination, on learning. These activities are embedded within six tutor lessons on advocacy. Using a posttest-only randomized controlled design, we compare the performance of 234 tutors (790 lesson completions) across three conditions: MCQ only, open response only, and a combination of both. We find no significant learning differences across conditions at posttest, but tutors in the MCQ condition took significantly less time to complete instruction. These findings suggest that MCQs are as effective as, and more efficient than, open-response tasks for learning when practice time is limited. To further enhance efficiency, we autograded open responses using GPT-4o and GPT-4-turbo. GPT models demonstrate sufficient proficiency for low-stakes assessment, though further research is needed for broader use. This study contributes a dataset of lesson log data, human annotation rubrics, and LLM prompts to promote transparency and reproducibility.</p> | -
dc.language | eng | -
dc.relation.ispartof | 15th International Learning Analytics and Knowledge Conference (LAK 2025) (03/03/2025-07/03/2025, Dublin) | -
dc.subject | AI-assisted tutoring | -
dc.subject | Assessment | -
dc.subject | Generative AI | -
dc.subject | Human-AI tutoring | -
dc.subject | Tutoring | -
dc.title | Does Multiple Choice Have a Future in the Age of Generative AI? A Posttest-only RCT | -
dc.type | Conference_Paper | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.1145/3706468.3706530 | -
dc.identifier.scopus | eid_2-s2.0-105000348823 | -
dc.identifier.spage | 494 | -
dc.identifier.epage | 504 | -
