Abstract

Learning engineering integrates human-centered design with data-informed decision-making to enhance learning experiences. The Learning Engineering Process (LEP) model provides a structured, iterative framework for addressing diverse educational challenges. This paper explores the responsible development of large language model (LLM)-based educational tools within an e-reader platform, emphasizing how the LEP model guided the process. The central problem was designing LLM-enabled features that could address persistent learning challenges, especially for at-risk higher education students, while adhering to responsible AI principles and learner-centered design. The case study highlights the design phase, focusing on a collaborative, design-led workshop that defined target learner profiles, identified unmet needs, and ideated solutions. The process was shaped by learning science and responsible AI principles, such as transparency and accountability, leading to the creation of prototypes for a content simplifier and a personalized feedback tool. Testing with target learners found that the content simplification and personalized feedback features resonated with students, suggesting a potentially meaningful way to increase content comprehension and confidence. These features were developed and released in fall 2024 for select e-textbooks, and initial data collected from faculty partners who use the books in their courses indicate positive performance and feedback. This case study illustrates how an iterative learning engineering approach, guided by ethical AI practices, learning science research, and learner-centered design, can facilitate the development of scalable, impactful solutions for learners.