Abstract
Incorporating formative practice with text content in a learning-by-doing approach has been shown to be a highly effective learning method, and questions generated using artificial intelligence (AI) were therefore developed to scale this approach as broadly as possible. Millions of AI-generated questions were incorporated into thousands of digital textbooks as a study tool available to learners using those books in any learning context. While this advancement of AI for learning tools is itself significant, the volume of micro-level data collected from the digital environment makes it possible to investigate in detail how learners interact with this learning tool. In this study, we leverage this large data set to investigate student interaction patterns with AI-generated questions, focusing specifically on how students persist when their first attempt at a question is incorrect. The aggregated data show differences in interaction patterns by question type, while a comparison with a single course in which the questions were assigned reveals differences in patterns that are likely due to context. The implications of these interaction patterns and avenues for future research are discussed.