File Download

Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1007/978-3-031-98459-4_28
- Scopus: eid_2-s2.0-105012025127
Citations:
- Scopus: 0
Appears in Collections: Conference Paper

Human Tutoring Improves the Impact of AI Tutor Use on Learning Outcomes
| Title | Human Tutoring Improves the Impact of AI Tutor Use on Learning Outcomes |
|---|---|
| Authors | Gurung, Ashish; Lin, Jionghao; Gutterman, Jordan; Thomas, R. Danielle; Houk, Alex; Gupta, Shivang; Brunskill, Emma; Branstetter, Lee; Aleven, Vincent; Koedinger, Kenneth |
| Keywords | AIED; EdTech; Human-AI Tutoring; ITS; Program Implementation |
| Issue Date | 20-Jul-2025 |
| Abstract | High-impact (human) tutoring and computer-based AI tutors are widely recognized for their effectiveness in supporting learning. However, human tutoring is costly and difficult to scale, whereas AI tutors vary widely in their ability to adapt to students’ academic and motivational needs. Our study presents a formative evaluation of a year-long implementation of virtual human-AI tutoring during the classroom use of AI tutors. Using year-long log data and standardized state tests, we examine the real-world impact of human-AI tutoring through measures of learning both within the AI tutor and on external standardized assessments. Through propensity score matching, we compare 356 seventh-grade students who received human-AI tutoring with 317 from the previous school year who received AI-only tutoring. The human-AI group demonstrated significantly higher growth and was 0.36 grade levels ahead by year’s end. Although there was no overall difference in state test scores, we found a significant interaction between human-AI tutoring and time-on-task (i.e., AI tutor use). For each standard deviation (3.26 h) increase in AI tutor use, the human-AI group improved by 0.28 standard deviations on state tests, compared to 0.06 for the AI-only group. This finding suggests that human tutors enhance the benefits of AI tutors, with gains increasing with time-on-task. Our findings replicate prior studies on human-AI tutoring over a longer time scale, spanning an entire school year. An important insight of this work is that students’ AI tutor usage data, particularly time-on-task, can serve as a valuable indicator of learning progress as well as a measure for identifying students in need of additional support to fully benefit from AI tutors. |
| Persistent Identifier | http://hdl.handle.net/10722/358760 |
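The key interaction reported in the abstract — state-test growth of 0.28 standard deviations per standard deviation (3.26 h) of AI tutor use for the human-AI group, versus 0.06 for the AI-only group — can be illustrated with a small simulation. The cohort sizes (356 and 317) and per-SD slopes come from the abstract; the noise level, random seed, and helper names (`usage_slope`, `simulate`) are illustrative assumptions, not the paper's data or analysis code.

```python
import random
import statistics as st

random.seed(0)

def usage_slope(z_time, score):
    """OLS slope of score on standardized time-on-task: cov(x, y) / var(x)."""
    mx, my = st.fmean(z_time), st.fmean(score)
    cov = sum((x - mx) * (y - my) for x, y in zip(z_time, score))
    var = sum((x - mx) ** 2 for x in z_time)
    return cov / var

def simulate(n, slope_per_sd):
    """Synthetic cohort: standardized usage z-scores and noisy test-score gains."""
    z = [random.gauss(0, 1) for _ in range(n)]
    y = [slope_per_sd * x + random.gauss(0, 0.1) for x in z]
    return z, y

# Cohort sizes from the abstract; the per-SD slopes (0.06 and 0.28) are the
# reported effect sizes, injected here to show how the interaction surfaces
# as a difference in within-group regression slopes.
z_ai, y_ai = simulate(317, 0.06)    # AI-only cohort (previous school year)
z_hai, y_hai = simulate(356, 0.28)  # human-AI tutoring cohort

print(f"AI-only slope:  {usage_slope(z_ai, y_ai):.2f} SD per SD of use")
print(f"Human-AI slope: {usage_slope(z_hai, y_hai):.2f} SD per SD of use")
print(f"Interaction:    {usage_slope(z_hai, y_hai) - usage_slope(z_ai, y_ai):.2f}")
```

Comparing the two within-group slopes recovers the interaction effect: under these assumptions, each extra standard deviation of time-on-task is worth roughly 0.22 SD more growth for students who also received human tutoring.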
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Gurung, Ashish | - |
| dc.contributor.author | Lin, Jionghao | - |
| dc.contributor.author | Gutterman, Jordan | - |
| dc.contributor.author | Thomas, R. Danielle | - |
| dc.contributor.author | Houk, Alex | - |
| dc.contributor.author | Gupta, Shivang | - |
| dc.contributor.author | Brunskill, Emma | - |
| dc.contributor.author | Branstetter, Lee | - |
| dc.contributor.author | Aleven, Vincent | - |
| dc.contributor.author | Koedinger, Kenneth | - |
| dc.date.accessioned | 2025-08-13T07:47:51Z | - |
| dc.date.available | 2025-08-13T07:47:51Z | - |
| dc.date.issued | 2025-07-20 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/358760 | - |
| dc.description.abstract | <p>High-impact (human) tutoring and computer-based AI tutors are widely recognized for their effectiveness in supporting learning. However, human tutoring is costly and difficult to scale, whereas AI tutors vary widely in their ability to adapt to students’ academic and motivational needs. Our study presents a formative evaluation of a year-long implementation of virtual human-AI tutoring during the classroom use of AI tutors. Using year-long log data and standardized state tests, we examine the real-world impact of human-AI tutoring through measures of learning both within the AI tutor and on external standardized assessments. Through propensity score matching, we compare 356 seventh-grade students who received human-AI tutoring with 317 from the previous school year who received AI-only tutoring. The human-AI group demonstrated significantly higher growth and was 0.36 grade levels ahead by year’s end. Although there was no overall difference in state test scores, we found a significant interaction between human-AI tutoring and time-on-task (i.e., AI tutor use). For each standard deviation (3.26 h) increase in AI tutor use, the human-AI group improved by 0.28 standard deviations on state tests, compared to 0.06 for the AI-only group. This finding suggests that human tutors enhance the benefits of AI tutors, with gains increasing with time-on-task. Our findings replicate prior studies on human-AI tutoring over a longer time scale, spanning an entire school year. An important insight of this work is that students’ AI tutor usage data, particularly time-on-task, can serve as a valuable indicator of learning progress as well as a measure for identifying students in need of additional support to fully benefit from AI tutors.</p> | - |
| dc.language | eng | - |
| dc.relation.ispartof | International Conference on Artificial Intelligence in Education (22/07/2025-26/07/2025, Palermo) | - |
| dc.subject | AIED | - |
| dc.subject | EdTech | - |
| dc.subject | Human-AI Tutoring | - |
| dc.subject | ITS | - |
| dc.subject | Program Implementation | - |
| dc.title | Human Tutoring Improves the Impact of AI Tutor Use on Learning Outcomes | - |
| dc.type | Conference_Paper | - |
| dc.description.nature | preprint | - |
| dc.identifier.doi | 10.1007/978-3-031-98459-4_28 | - |
| dc.identifier.scopus | eid_2-s2.0-105012025127 | - |
| dc.identifier.volume | 15880 | - |
| dc.identifier.spage | 393 | - |
| dc.identifier.epage | 407 | - |
