Article: Parameter-Efficiently Fine-Tuning Large Language Models for Classroom Dialogue Analysis

Title: Parameter-Efficiently Fine-Tuning Large Language Models for Classroom Dialogue Analysis
Authors: Wang, Deliang; Zheng, Yaqian; Li, Jinjiang; Chen, Gaowei
Keywords: artificial intelligence (AI); classroom dialogue; dialogic move; large language model; parameter-efficient fine-tuning (PEFT)
Issue Date: 7-May-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Learning Technologies, 2025, v. 18, p. 542-555
Abstract

Researchers have increasingly applied artificial intelligence to automatically analyze classroom dialogue, aiming to provide teachers with timely feedback on this educationally significant activity. However, traditional machine learning and deep learning models face challenges, such as limited performance and a lack of generalizability, across the various dimensions of classroom dialogue and educational contexts. Recent efforts to apply large language models (LLMs) to classroom dialogue analysis have relied predominantly on prompt engineering, largely because of the high cost of full fine-tuning, and the resulting performance has been suboptimal, leaving clear room for improvement. We therefore propose applying parameter-efficient fine-tuning (PEFT) techniques to enhance the performance of LLMs in classroom dialogue analysis. Specifically, we used low-rank adaptation (LoRA), a prominent PEFT technique, to fine-tune three state-of-the-art LLMs (Llama-3.2-3B, Gemma-2-9B, and Mistral-7B-v0.3) to analyze both teachers' and students' dialogic moves in K-12 mathematics lessons. The experimental results indicate that LLMs fine-tuned with the PEFT technique outperform fully fine-tuned BERT and RoBERTa models as well as prompted LLMs. Moreover, the PEFT approach reduced the number of trainable parameters in the LLMs by more than 300 times and shortened their training time. Although training the PEFT-tuned LLMs still took longer than fully fine-tuning BERT and RoBERTa, these LLMs demonstrated both specialization in this specific dimension and generalizability to other tasks and contexts. We believe that PEFT techniques offer a promising direction for future research on classroom dialogue analysis.
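
The abstract describes low-rank adaptation (LoRA), a PEFT technique that freezes the pretrained weights and trains only small low-rank adapter matrices, which is how the reported 300-fold reduction in trainable parameters is obtained. As a rough illustration of how such a setup is typically assembled, the Python sketch below uses the Hugging Face transformers, peft, and datasets libraries; the base model choice, LoRA hyperparameters (rank, alpha, target modules), the dialogic-move label set, and the toy training examples are all illustrative assumptions rather than the authors' actual configuration.

# Minimal sketch of LoRA-based parameter-efficient fine-tuning for
# dialogic-move classification. Labels, hyperparameters, and data are
# illustrative assumptions, not the paper's exact setup.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

MODEL_NAME = "meta-llama/Llama-3.2-3B"  # one of the three LLMs named in the abstract (access is gated)
LABELS = ["elicitation", "explanation", "evaluation", "other"]  # hypothetical dialogic-move labels

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama-style tokenizers ship without a pad token

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))
model.config.pad_token_id = tokenizer.pad_token_id

# LoRA: keep the base weights frozen and train small low-rank adapters
# injected into the attention projections, plus the classification head.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=16,                                 # rank of the low-rank update (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style models
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the sharply reduced trainable-parameter count

# Toy dataset standing in for annotated classroom-dialogue turns.
train_data = Dataset.from_dict({
    "text": ["Why do you think the two fractions are equal?",
             "Because both of them simplify to one half."],
    "label": [0, 1],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_data = train_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-dialogue",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=train_data,
)
trainer.train()

The print_trainable_parameters call reports how many parameters LoRA leaves trainable, which is the quantity the abstract contrasts with fully fine-tuning BERT and RoBERTa.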


Persistent Identifier: http://hdl.handle.net/10722/361916

 

DC Field: Value
dc.contributor.author: Wang, Deliang
dc.contributor.author: Zheng, Yaqian
dc.contributor.author: Li, Jinjiang
dc.contributor.author: Chen, Gaowei
dc.date.accessioned: 2025-09-17T00:32:00Z
dc.date.available: 2025-09-17T00:32:00Z
dc.date.issued: 2025-05-07
dc.identifier.citation: IEEE Transactions on Learning Technologies, 2025, v. 18, p. 542-555
dc.identifier.uri: http://hdl.handle.net/10722/361916
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Learning Technologies
dc.subject: artificial intelligence (AI)
dc.subject: Classroom dialogue
dc.subject: dialogic move
dc.subject: large language model
dc.subject: parameter-efficient fine-tuning (PEFT)
dc.title: Parameter-Efficiently Fine-Tuning Large Language Models for Classroom Dialogue Analysis
dc.type: Article
dc.identifier.doi: 10.1109/TLT.2025.3567995
dc.identifier.scopus: eid_2-s2.0-105004597181
dc.identifier.volume: 18
dc.identifier.spage: 542
dc.identifier.epage: 555
dc.identifier.eissn: 1939-1382
dc.identifier.issnl: 1939-1382
