Links for fulltext (May Require Subscription)
- Publisher Website: 10.1109/TLT.2025.3567995
- Scopus: eid_2-s2.0-105004597181
Citations:
- Scopus: 0
Article: Parameter-Efficiently Fine-Tuning Large Language Models for Classroom Dialogue Analysis
| Title | Parameter-Efficiently Fine-Tuning Large Language Models for Classroom Dialogue Analysis |
|---|---|
| Authors | Wang, Deliang; Zheng, Yaqian; Li, Jinjiang; Chen, Gaowei |
| Keywords | artificial intelligence (AI); Classroom dialogue; dialogic move; large language model; parameter-efficient fine-tuning (PEFT) |
| Issue Date | 7-May-2025 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Transactions on Learning Technologies, 2025, v. 18, p. 542-555 |
| Abstract | Researchers have increasingly utilized artificial intelligence to automatically analyze classroom dialogue, aiming to provide timely feedback to teachers due to its educational significance. However, traditional machine learning and deep learning models face challenges, such as limited performance and lack of generalizability, across various dimensions of classroom dialogue and educational contexts. Recent efforts to utilize large language models (LLMs) for classroom dialogue analysis have predominantly relied on prompt engineering techniques, primarily due to the high costs associated with full fine-tuning, which has resulted in suboptimal performance and areas needing improvement. We, therefore, propose the application of parameter-efficient fine-tuning (PEFT) techniques to enhance the performance of LLMs in classroom dialogue analysis. Specifically, we utilized low-rank adaptation, a prominent PEFT technique, to fine-tune three state-of-the-art LLMs—Llama-3.2-3B, Gemma-2-9B, and Mistral-7B-v0.3—targeting the analysis of both teachers' and students' dialogic moves within K-12 mathematics lessons. The experimental results indicate that, in comparison to fully fine-tuning BERT and RoBERTa models and prompting LLMs, LLMs fine-tuned using the PEFT technique achieve superior performance. Moreover, the PEFT approach significantly reduced the number of trainable parameters within the LLMs by over 300 times and decreased their training duration. Although the training time for PEFT-tuned LLMs was still longer than that required for fully fine-tuning BERT and RoBERTa, these LLMs demonstrated specialization in this specific dimension and generalizability to other tasks and contexts. We believe that the use of PEFT techniques presents a promising direction for future research in classroom dialogue analysis. |
| Persistent Identifier | http://hdl.handle.net/10722/361916 |
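The abstract above describes using low-rank adaptation (LoRA), a PEFT technique, to fine-tune Llama-3.2-3B, Gemma-2-9B, and Mistral-7B-v0.3 for classifying teachers' and students' dialogic moves. As a minimal sketch of what such a setup might look like with the Hugging Face `transformers` and `peft` libraries (not the authors' code): the model name, number of dialogic-move categories, and LoRA hyperparameters below are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: LoRA fine-tuning of an LLM for dialogic-move classification.
# Model choice, label count, and hyperparameters are assumptions for illustration.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

MODEL_NAME = "meta-llama/Llama-3.2-3B"  # one of the three LLMs named in the abstract
NUM_MOVES = 8                           # hypothetical number of dialogic-move categories

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama-style tokenizers lack a pad token

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_MOVES)
model.config.pad_token_id = tokenizer.pad_token_id

# Inject low-rank adapters into the attention projections; the base weights stay
# frozen, so only a small fraction of the model's parameters is actually trained.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=16,                               # rank of the low-rank update matrices (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()      # reports the reduction in trainable parameters
```

Training then proceeds with a standard classification loop (e.g., the `Trainer` API) over utterances labeled with dialogic moves; only the adapter weights are updated, which is the source of the large reduction in trainable parameters reported in the abstract.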
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Wang, Deliang | - |
| dc.contributor.author | Zheng, Yaqian | - |
| dc.contributor.author | Li, Jinjiang | - |
| dc.contributor.author | Chen, Gaowei | - |
| dc.date.accessioned | 2025-09-17T00:32:00Z | - |
| dc.date.available | 2025-09-17T00:32:00Z | - |
| dc.date.issued | 2025-05-07 | - |
| dc.identifier.citation | IEEE Transactions on Learning Technologies, 2025, v. 18, p. 542-555 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/361916 | - |
| dc.description.abstract | Researchers have increasingly utilized artificial intelligence to automatically analyze classroom dialogue, aiming to provide timely feedback to teachers due to its educational significance. However, traditional machine learning and deep learning models face challenges, such as limited performance and lack of generalizability, across various dimensions of classroom dialogue and educational contexts. Recent efforts to utilize large language models (LLMs) for classroom dialogue analysis have predominantly relied on prompt engineering techniques, primarily due to the high costs associated with full fine-tuning, which has resulted in suboptimal performance and areas needing improvement. We, therefore, propose the application of parameter-efficient fine-tuning (PEFT) techniques to enhance the performance of LLMs in classroom dialogue analysis. Specifically, we utilized low-rank adaptation, a prominent PEFT technique, to fine-tune three state-of-the-art LLMs—Llama-3.2-3B, Gemma-2-9B, and Mistral-7B-v0.3—targeting the analysis of both teachers' and students' dialogic moves within K-12 mathematics lessons. The experimental results indicate that, in comparison to fully fine-tuning BERT and RoBERTa models and prompting LLMs, LLMs fine-tuned using the PEFT technique achieve superior performance. Moreover, the PEFT approach significantly reduced the number of trainable parameters within the LLMs by over 300 times and decreased their training duration. Although the training time for PEFT-tuned LLMs was still longer than that required for fully fine-tuning BERT and RoBERTa, these LLMs demonstrated specialization in this specific dimension and generalizability to other tasks and contexts. We believe that the use of PEFT techniques presents a promising direction for future research in classroom dialogue analysis. | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Transactions on Learning Technologies | - |
| dc.subject | artificial intelligence (AI) | - |
| dc.subject | Classroom dialogue | - |
| dc.subject | dialogic move | - |
| dc.subject | large language model | - |
| dc.subject | parameter-efficient fine-tuning (PEFT) | - |
| dc.title | Parameter-Efficiently Fine-Tuning Large Language Models for Classroom Dialogue Analysis | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TLT.2025.3567995 | - |
| dc.identifier.scopus | eid_2-s2.0-105004597181 | - |
| dc.identifier.volume | 18 | - |
| dc.identifier.spage | 542 | - |
| dc.identifier.epage | 555 | - |
| dc.identifier.eissn | 1939-1382 | - |
| dc.identifier.issnl | 1939-1382 | - |
