Abstract

In recent years, artificial intelligence has been leveraged to develop an automatic question generation (AQG) system that places formative practice questions alongside textbook content in an e-reader platform. Engaging with formative practice while reading is a highly effective learning strategy, and AQG made it possible to scale this method to thousands of textbooks and millions of students at no cost. Previous research used aggregated data from all questions answered by all students to complete the largest evaluation to date of performance metrics for automatically generated questions. However, those studies also suggested that student behavior and question performance metrics would differ when the questions were assigned in a classroom setting. In this study, we evaluate data collected from 19 course sections taught by four faculty members at Iowa State University to gain a broader understanding of how students engage with these AI-generated practice questions when they are part of their university courses. Implementation strategies for the courses, student engagement, and question performance metrics are analyzed, and implications for further use in higher education classrooms are discussed.