Abstract
Decades of research have examined the feedback delivered to students after they answer questions—when feedback should be delivered and what kind is most beneficial for learning. While this body of research on feedback is well established, advances in technology have enabled new methods for developing feedback, and large-scale usage provides new data for understanding how feedback affects learners. This paper focuses on feedback developed using artificial intelligence for an automatic question generation system. The automatically generated questions were placed alongside text as a formative learning tool in an e-reader platform, and three types of feedback were randomized across the questions: outcome feedback, context feedback, and common answer feedback. In this study, we investigate the effect of these feedback types on student behavior. This analysis contributes to the expanding body of research on automatic question generation in two ways: little research has been reported on automatically generated feedback specifically, and microlevel usage data can reveal additional insights into the relationship between feedback and student learning behaviors.