Abstract
Advances in artificial intelligence and automatic question generation have made it possible to create millions of questions, enabling an evidence-based learn-by-doing method to be applied to thousands of e-textbooks at an unprecedented scale. Yet scaling this learning method presents a new challenge: how to monitor the quality of automatically generated questions and take action as needed when human review is not feasible. To address this challenge, an adaptive system called the Content Improvement Service was developed as an automated component of the platform architecture. Rather than adapting content or a learning path based on student mastery, this system uses student data to evaluate question quality, optimizing the learning environment in real time. In this paper, we address the theoretical context for a platform-level adaptive system, describe the methods by which the Content Improvement Service functions, and provide examples of questions identified and removed through these methods. Future research applications are also discussed.