Abstract
The doer effect is the learning science principle that doing formative practice while reading, in a "learn by doing" approach, is more effective for learning than reading alone. Research on the doer effect in higher education has found doing integrated practice to be, on average, six times more effective for learning than reading, and the relationship is causal. Replication research confirming these findings has contributed to the generalizability of this learn-by-doing method and validated the need to increase the availability of formative practice for students. In this paper, we investigate the doer effect in a novel way: using formative practice generated through artificial intelligence (AI). While research on automatically generated questions has established performance benchmarks and ensured their validity as practice items, could these generated items also engage the doer effect? This study uses a novel data set of student exam scores combined with micro-level reading and doing engagement data collected from two sections of a Cognitive Psychology course. We apply the established correlational doer effect analysis to these data to examine whether the results can be reproduced in this context. This study also offers a rare opportunity to review natural learning contexts from a holistic data perspective: how course policies impact student engagement, how student engagement impacts exam scores, and how the learning impact of doing practice can be measured by the doer effect. Implications for teaching and learning with AI-generated practice and for the scalability of the doer effect are discussed.