File Download
There are no files associated with this item.
Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1007/978-3-031-36272-9_15
- Scopus: eid_2-s2.0-85159703753
- WOS: WOS:001321530100015
Conference Paper: Does Informativeness Matter? Active Learning for Educational Dialogue Act Classification
| Title | Does Informativeness Matter? Active Learning for Educational Dialogue Act Classification |
|---|---|
| Authors | Tan, Wei; Lin, Jionghao; Lang, David; Chen, Guanliang; Gašević, Dragan; Du, Lan; Buntine, Wray |
| Keywords | Active Learning; Dialogue Act Classification; Informativeness; Intelligent Tutoring Systems; Large Language Models |
| Issue Date | 2023 |
| Citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2023, v. 13916 LNAI, p. 176-188 |
| Abstract | Dialogue Acts (DAs) can be used to explain what expert tutors do and what students know during the tutoring process. Most empirical studies adopt the random sampling method to obtain sentence samples for manual annotation of DAs, which are then used to train DA classifiers. However, these studies have paid little attention to sample informativeness, which can reflect the information quantity of the selected samples and inform the extent to which a classifier can learn patterns. Notably, the informativeness level may vary among the samples and the classifier might only need a small amount of low informative samples to learn the patterns. Random sampling may overlook sample informativeness, which consumes human labelling costs and contributes less to training the classifiers. As an alternative, researchers suggest employing statistical sampling methods of Active Learning (AL) to identify the informative samples for training the classifiers. However, the use of AL methods in educational DA classification tasks is under-explored. In this paper, we examine the informativeness of annotated sentence samples. Then, the study investigates how the AL methods can select informative samples to support DA classifiers in the AL sampling process. The results reveal that most annotated sentences present low informativeness in the training dataset and the patterns of these sentences can be easily captured by the DA classifier. We also demonstrate how AL methods can reduce the cost of manual annotation in the AL sampling process. |
| Persistent Identifier | http://hdl.handle.net/10722/354277 |
| ISSN | 0302-9743 (2023 SCImago Journal Rankings: 0.606) |
| ISI Accession Number ID | WOS:001321530100015 |
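The abstract describes selecting informative samples for annotation rather than sampling at random. As a minimal illustration only (not the paper's actual method or models), least-confidence scoring is one common active-learning acquisition function: a sample's informativeness is taken as one minus the classifier's top predicted class probability, and the annotation budget is spent on the highest-scoring samples. All names and the toy probabilities below are assumptions for the sketch.

```python
import numpy as np

def least_confidence_scores(probs: np.ndarray) -> np.ndarray:
    """Informativeness as least confidence: 1 - max predicted class probability.

    probs: (n_samples, n_classes) array of classifier probabilities.
    Higher score = the classifier is less sure = more informative to label.
    """
    return 1.0 - probs.max(axis=1)

def select_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` most informative unlabelled samples to annotate."""
    scores = least_confidence_scores(probs)
    # Sort descending by score and keep the top `budget` indices.
    return np.argsort(scores)[::-1][:budget]

# Toy pool: 4 unlabelled sentences, 3 dialogue-act classes.
probs = np.array([
    [0.90, 0.05, 0.05],  # confident prediction -> low informativeness
    [0.40, 0.35, 0.25],  # uncertain -> high informativeness
    [0.80, 0.10, 0.10],
    [0.34, 0.33, 0.33],  # most uncertain of the pool
])
print(select_for_annotation(probs, budget=2))  # -> [3 1]
```

Under this scheme, highly confident (low-informativeness) samples are skipped, which matches the paper's finding that such samples contribute little to training while still consuming labelling cost.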
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Tan, Wei | - |
| dc.contributor.author | Lin, Jionghao | - |
| dc.contributor.author | Lang, David | - |
| dc.contributor.author | Chen, Guanliang | - |
| dc.contributor.author | Gašević, Dragan | - |
| dc.contributor.author | Du, Lan | - |
| dc.contributor.author | Buntine, Wray | - |
| dc.date.accessioned | 2025-02-07T08:47:37Z | - |
| dc.date.available | 2025-02-07T08:47:37Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2023, v. 13916 LNAI, p. 176-188 | - |
| dc.identifier.issn | 0302-9743 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/354277 | - |
| dc.description.abstract | Dialogue Acts (DAs) can be used to explain what expert tutors do and what students know during the tutoring process. Most empirical studies adopt the random sampling method to obtain sentence samples for manual annotation of DAs, which are then used to train DA classifiers. However, these studies have paid little attention to sample informativeness, which can reflect the information quantity of the selected samples and inform the extent to which a classifier can learn patterns. Notably, the informativeness level may vary among the samples and the classifier might only need a small amount of low informative samples to learn the patterns. Random sampling may overlook sample informativeness, which consumes human labelling costs and contributes less to training the classifiers. As an alternative, researchers suggest employing statistical sampling methods of Active Learning (AL) to identify the informative samples for training the classifiers. However, the use of AL methods in educational DA classification tasks is under-explored. In this paper, we examine the informativeness of annotated sentence samples. Then, the study investigates how the AL methods can select informative samples to support DA classifiers in the AL sampling process. The results reveal that most annotated sentences present low informativeness in the training dataset and the patterns of these sentences can be easily captured by the DA classifier. We also demonstrate how AL methods can reduce the cost of manual annotation in the AL sampling process. | - |
| dc.language | eng | - |
| dc.relation.ispartof | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | - |
| dc.subject | Active Learning | - |
| dc.subject | Dialogue Act Classification | - |
| dc.subject | Informativeness | - |
| dc.subject | Intelligent Tutoring Systems | - |
| dc.subject | Large Language Models | - |
| dc.title | Does Informativeness Matter? Active Learning for Educational Dialogue Act Classification | - |
| dc.type | Conference_Paper | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1007/978-3-031-36272-9_15 | - |
| dc.identifier.scopus | eid_2-s2.0-85159703753 | - |
| dc.identifier.volume | 13916 LNAI | - |
| dc.identifier.spage | 176 | - |
| dc.identifier.epage | 188 | - |
| dc.identifier.eissn | 1611-3349 | - |
| dc.identifier.isi | WOS:001321530100015 | - |
