Article: Using explainable AI to unravel classroom dialogue analysis: Effects of explanations on teachers' trust, technology acceptance and cognitive load

Title: Using explainable AI to unravel classroom dialogue analysis: Effects of explanations on teachers' trust, technology acceptance and cognitive load
Authors: WANG, Deliang; BIAN, Cunling; CHEN, Gaowei
Keywords: classroom dialogue; explainable AI; interpretability; teachers; trust
Issue Date: 23-Apr-2024
Publisher: Wiley
Citation: British Journal of Educational Technology, 2024
Abstract

Deep neural networks are increasingly employed to model classroom dialogue and provide teachers with prompt and valuable feedback on their teaching practices. However, these deep learning models often have intricate structures with numerous unknown parameters, functioning as black boxes. The lack of clear explanations regarding their classroom dialogue analysis likely leads teachers to distrust and underutilize these AI-powered models. To tackle this issue, we leveraged explainable AI to unravel classroom dialogue analysis and conducted an experiment to evaluate the effects of explanations. Fifty-nine pre-service teachers were recruited and randomly assigned to either a treatment (n = 30) or control (n = 29) group. Initially, both groups learned to analyse classroom dialogue using AI-powered models without explanations. Subsequently, the treatment group received both AI analysis and explanations, while the control group continued to receive only AI predictions. The results demonstrated that teachers in the treatment group exhibited significantly higher levels of trust in and technology acceptance of AI-powered models for classroom dialogue analysis compared to those in the control group. Notably, there were no significant differences in cognitive load between the two groups. Furthermore, teachers in the treatment group expressed high satisfaction with the explanations. During interviews, they also elucidated how the explanations changed their perceptions of model features and attitudes towards the models. This study is among the pioneering works to propose and validate the use of explainable AI to address interpretability challenges within deep learning-based models in the context of classroom dialogue analysis.


Persistent Identifier: http://hdl.handle.net/10722/345960
ISSN: 0007-1013
2023 Impact Factor: 6.7
2023 SCImago Journal Rankings: 2.425


dc.contributor.author: WANG, Deliang
dc.contributor.author: BIAN, Cunling
dc.contributor.author: CHEN, Gaowei
dc.date.accessioned: 2024-09-04T07:06:46Z
dc.date.available: 2024-09-04T07:06:46Z
dc.date.issued: 2024-04-23
dc.identifier.citation: British Journal of Educational Technology, 2024
dc.identifier.issn: 0007-1013
dc.identifier.uri: http://hdl.handle.net/10722/345960
dc.description.abstract: Deep neural networks are increasingly employed to model classroom dialogue and provide teachers with prompt and valuable feedback on their teaching practices. However, these deep learning models often have intricate structures with numerous unknown parameters, functioning as black boxes. The lack of clear explanations regarding their classroom dialogue analysis likely leads teachers to distrust and underutilize these AI-powered models. To tackle this issue, we leveraged explainable AI to unravel classroom dialogue analysis and conducted an experiment to evaluate the effects of explanations. Fifty-nine pre-service teachers were recruited and randomly assigned to either a treatment (n = 30) or control (n = 29) group. Initially, both groups learned to analyse classroom dialogue using AI-powered models without explanations. Subsequently, the treatment group received both AI analysis and explanations, while the control group continued to receive only AI predictions. The results demonstrated that teachers in the treatment group exhibited significantly higher levels of trust in and technology acceptance of AI-powered models for classroom dialogue analysis compared to those in the control group. Notably, there were no significant differences in cognitive load between the two groups. Furthermore, teachers in the treatment group expressed high satisfaction with the explanations. During interviews, they also elucidated how the explanations changed their perceptions of model features and attitudes towards the models. This study is among the pioneering works to propose and validate the use of explainable AI to address interpretability challenges within deep learning-based models in the context of classroom dialogue analysis.
dc.language: eng
dc.publisher: Wiley
dc.relation.ispartof: British Journal of Educational Technology
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: classroom dialogue
dc.subject: explainable AI
dc.subject: interpretability
dc.subject: teachers
dc.subject: trust
dc.title: Using explainable AI to unravel classroom dialogue analysis: Effects of explanations on teachers' trust, technology acceptance and cognitive load
dc.type: Article
dc.identifier.doi: 10.1111/bjet.13466
dc.identifier.scopus: eid_2-s2.0-85191077453
dc.identifier.eissn: 1467-8535
dc.identifier.issnl: 0007-1013
