How does artificial intelligence compare to human feedback? A meta-analysis of performance, feedback perception, and learning dispositions

This exploratory meta-analysis synthesises current research on the effectiveness of Artificial Intelligence (AI)-generated feedback compared to traditional human-provided feedback. Drawing on 41 studies involving a total of 4813 students, the findings reveal no statistically significant differences in learning performance between students who received AI-generated feedback and those who received human-provided feedback. The pooled effect size was small and statistically non-significant (Hedges' g = 0.25, CI [−0.11; 0.60]), indicating that AI feedback is potentially as effective as human feedback. A separate meta-analysis focusing exclusively on studies in the domain of language and writing confirmed similar findings, with high heterogeneity persisting (I² = 95%). The study further explored differences in feedback perception and found a small, negative, and statistically non-significant effect size (Hedges' g = −0.20, CI [−0.67; 0.27]). The study advocates for a hybrid approach, leveraging the scalability of AI while retaining the deep, empathetic, and contextual features of human feedback.
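For readers unfamiliar with the effect-size metric reported above, Hedges' g is Cohen's d (the standardised mean difference between two groups) multiplied by a small-sample bias correction. A minimal sketch of the standard formula, using made-up group statistics purely for illustration (the numbers below are not from the meta-analysis):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardised mean difference with small-sample bias correction.

    m*, s*, n* are the mean, standard deviation, and size of each group.
    """
    df = n1 + n2 - 2
    # Pooled standard deviation across both groups
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled        # Cohen's d
    j = 1 - 3 / (4 * df - 1)        # Hedges' correction factor
    return j * d

# Illustrative values only: two groups of 30 with a raw mean difference
# of 0.25 standard deviations gives g slightly below 0.25.
g = hedges_g(1.0, 1.0, 30, 0.75, 1.0, 30)
```

A pooled g of 0.25 with a confidence interval crossing zero, as reported above, means the average AI-vs-human difference is small and could plausibly be zero.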


Kaliisa, R., Misiejuk, K., López-Pernas, S., & Saqr, M. (2025). How does artificial intelligence compare to human feedback? A meta-analysis of performance, feedback perception, and learning dispositions. Educational Psychology, 1–32. https://doi.org/10.1080/01443410.2025.2553639