Conference Paper: Does Informativeness Matter? Active Learning for Educational Dialogue Act Classification

Title: Does Informativeness Matter? Active Learning for Educational Dialogue Act Classification
Authors: Tan, Wei; Lin, Jionghao; Lang, David; Chen, Guanliang; Gašević, Dragan; Du, Lan; Buntine, Wray
Keywords: Active Learning; Dialogue Act Classification; Informativeness; Intelligent Tutoring Systems; Large Language Models
Issue Date: 2023
Citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2023, v. 13916 LNAI, p. 176-188
Abstract: Dialogue Acts (DAs) can be used to explain what expert tutors do and what students know during the tutoring process. Most empirical studies adopt random sampling to obtain sentence samples for manual annotation of DAs, which are then used to train DA classifiers. However, these studies have paid little attention to sample informativeness, which reflects the information quantity of the selected samples and indicates the extent to which a classifier can learn patterns from them. Notably, informativeness may vary among samples, and a classifier might need only a small number of low-informativeness samples to learn their patterns. Random sampling may overlook sample informativeness, wasting human labelling effort on samples that contribute little to training the classifiers. As an alternative, researchers suggest employing the statistical sampling methods of Active Learning (AL) to identify informative samples for training the classifiers. However, the use of AL methods in educational DA classification tasks is under-explored. In this paper, we examine the informativeness of annotated sentence samples and investigate how AL methods can select informative samples to support DA classifiers in the AL sampling process. The results reveal that most annotated sentences in the training dataset present low informativeness, and that the patterns of these sentences can be easily captured by the DA classifier. We also demonstrate how AL methods can reduce the cost of manual annotation in the AL sampling process.
Persistent Identifier: http://hdl.handle.net/10722/354277
ISSN: 0302-9743
2023 SCImago Journal Rankings: 0.606
ISI Accession Number ID: WOS:001321530100015
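The abstract describes Active Learning (AL) sampling, in which an informativeness score decides which unlabelled sentences are sent for manual annotation. The paper's exact method is not specified on this record page; the sketch below illustrates one common informativeness measure, least-confidence uncertainty sampling, using scikit-learn. The sentences, labels, model choice, and annotation budget are all hypothetical placeholders, not the authors' data or implementation.

# Illustrative sketch only: least-confidence Active Learning sampling,
# one common way to score sample "informativeness". Not the paper's
# exact method; all data and parameters below are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled seed set and unlabelled pool of tutoring sentences.
seed_texts  = ["what is a force", "good job", "try again", "a force is a push"]
seed_labels = ["question", "feedback", "feedback", "statement"]
pool_texts  = ["can you explain gravity", "well done", "not quite",
               "gravity pulls objects down", "why does it fall"]

vec = TfidfVectorizer()
X_seed = vec.fit_transform(seed_texts)
X_pool = vec.transform(pool_texts)

# Train an initial DA classifier on the small seed set.
clf = LogisticRegression(max_iter=1000).fit(X_seed, seed_labels)

# Least-confidence score: 1 - max predicted class probability.
# Higher scores mark sentences the classifier is least sure about,
# i.e. the most informative ones to send for manual annotation.
probs = clf.predict_proba(X_pool)
uncertainty = 1.0 - probs.max(axis=1)

budget = 2  # hypothetical annotation budget per AL round
query_idx = np.argsort(uncertainty)[::-1][:budget]
for i in query_idx:
    print(f"annotate: {pool_texts[i]!r} (uncertainty={uncertainty[i]:.2f})")

In a full AL loop, the queried sentences would be annotated, moved from the pool into the training set, and the classifier retrained; the loop repeats until the budget is exhausted or performance plateaus.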

 

DC Field: Value
dc.contributor.author: Tan, Wei
dc.contributor.author: Lin, Jionghao
dc.contributor.author: Lang, David
dc.contributor.author: Chen, Guanliang
dc.contributor.author: Gašević, Dragan
dc.contributor.author: Du, Lan
dc.contributor.author: Buntine, Wray
dc.date.accessioned: 2025-02-07T08:47:37Z
dc.date.available: 2025-02-07T08:47:37Z
dc.date.issued: 2023
dc.identifier.citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2023, v. 13916 LNAI, p. 176-188
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/10722/354277
dc.description.abstract: Dialogue Acts (DAs) can be used to explain what expert tutors do and what students know during the tutoring process. Most empirical studies adopt random sampling to obtain sentence samples for manual annotation of DAs, which are then used to train DA classifiers. However, these studies have paid little attention to sample informativeness, which reflects the information quantity of the selected samples and indicates the extent to which a classifier can learn patterns from them. Notably, informativeness may vary among samples, and a classifier might need only a small number of low-informativeness samples to learn their patterns. Random sampling may overlook sample informativeness, wasting human labelling effort on samples that contribute little to training the classifiers. As an alternative, researchers suggest employing the statistical sampling methods of Active Learning (AL) to identify informative samples for training the classifiers. However, the use of AL methods in educational DA classification tasks is under-explored. In this paper, we examine the informativeness of annotated sentence samples and investigate how AL methods can select informative samples to support DA classifiers in the AL sampling process. The results reveal that most annotated sentences in the training dataset present low informativeness, and that the patterns of these sentences can be easily captured by the DA classifier. We also demonstrate how AL methods can reduce the cost of manual annotation in the AL sampling process.
dc.language: eng
dc.relation.ispartof: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.subject: Active Learning
dc.subject: Dialogue Act Classification
dc.subject: Informativeness
dc.subject: Intelligent Tutoring Systems
dc.subject: Large Language Models
dc.title: Does Informativeness Matter? Active Learning for Educational Dialogue Act Classification
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/978-3-031-36272-9_15
dc.identifier.scopus: eid_2-s2.0-85159703753
dc.identifier.volume: 13916 LNAI
dc.identifier.spage: 176
dc.identifier.epage: 188
dc.identifier.eissn: 1611-3349
dc.identifier.isi: WOS:001321530100015
