Bookshelf CoachMe® Research Study

The VitalSource Learning Science team is seeking instructors to participate in a research study using Bookshelf CoachMe as part of their class instruction. It’s easy. No changes to your teaching. Just add a Bookshelf CoachMe participation score.

How does Bookshelf CoachMe impact student learning outcomes?

Bookshelf CoachMe is a free study tool in the e-reader, available in more than 9,000 textbooks. As students read their textbook, Bookshelf CoachMe provides frequent formative practice questions that give them the opportunity to actively engage with the content and practice at the point of learning. The Bookshelf CoachMe questions are generated using an AI automatic question generation system that has been rigorously evaluated in peer-reviewed research. [1-5]

Research Foundation

Research from Carnegie Mellon University on the Doer Effect has shown that doing practice while reading causes better learning outcomes. [6-8] This research has been replicated using Acrobatiq courseware. [9-11] Faculty teaching with SmartStart courseware incentivized the AI-generated practice and found that students increased their engagement with both the content and the practice and earned higher mean exam scores compared to previous semesters. [12] This research provides the basis for the approach and protocols of this study.

Emerging Findings

Faculty across four universities are currently participating in the Bookshelf CoachMe research study. We have found that, on average, student textbook use triples compared to prior semesters. Faculty have reported qualitative observations that student discussions (written and oral) have improved, as has the quality of written assignments and projects. In courses with summative assessments, we have found improvements in exam scores, especially for students in the 25th and 50th percentiles. These initial findings are positive, but more use cases are needed.

Objective of the Study

The objective of this study is to investigate how doing the Bookshelf CoachMe practice questions while reading the textbook content impacts student learning outcomes. Specifically, the study aims:

To identify any impacts to student learning outcomes when the Bookshelf CoachMe practice questions are required as part of the course grade.

To gather faculty and student perceptions of the Bookshelf CoachMe questions used during the course.

Benefits of Participation

Learn how to apply a beneficial learning method that is already available in your Bookshelf e-textbook to your teaching and learning practice.

Contribute valuable insights into the effects of AI-generated formative practice on student behavior and outcomes that can help faculty at your institution and others.

The VitalSource Learning Science team directing this research program can support the IRB process and co-author presentations or publications if you are interested in formally sharing the results of this project.

Investigating the benefits of using AI grounded in learning science in the classroom can provide faculty with opportunities for additional institutional support, such as fellowships, grants, and awards.

Participant Criteria

To be considered for participation in this research study, the following criteria must be met.

Time Commitment for Participants

We are aware of the busy schedules faculty have and aim to minimize the time commitment involved in this study. We ask for:

A one-hour meeting prior to the start of the semester to review the feature and the study and to gather details about your course.

A 30-minute review at the end of the semester to gather your feedback.

Students to complete a short survey at the end of the term.

VitalSource will send weekly emails with a link to your data report.

How to Participate

Schedule a time to discuss your potential participation.

Recognized Research

VitalSource Honored for Innovative Use of AI Tool to Improve Student Learning

2023 CODiE Award: Best Use of Artificial Intelligence in EdTech

VitalSource Receives Industry Award for AI-Powered Study Coach

1EdTech Silver Award (Digital Resource, eText, and Learning App Innovation)

Discover additional information about the VitalSource Learning Science Team

References

  1. Van Campenhout, R., Dittel, J. S., Jerome, B., & Johnson, B. G. (2021). Transforming textbooks into learning by doing environments: An evaluation of textbook-based automatic question generation. Third Workshop on Intelligent Textbooks at the 22nd International Conference on Artificial Intelligence in Education, CEUR Workshop Proceedings, 1–12. https://ceur-ws.org/Vol-2895/paper06.pdf
  2. Johnson, B. G., Dittel, J. S., Van Campenhout, R., & Jerome, B. (2022). Discrimination of automatically generated questions used as formative practice. Proceedings of the Ninth ACM Conference on Learning@Scale, 325–329. https://doi.org/10.1145/3491140.3528323
  3. Van Campenhout, R., Hubertz, M., & Johnson, B. G. (2022). Evaluating AI-generated questions: A mixed-methods analysis using question data and student perceptions. In M. M. Rodrigo, N. Matsuda, A. I. Cristea, V. Dimitrova (Eds.), Artificial Intelligence in Education. AIED 2022. Lecture Notes in Computer Science, vol 13355, 344–353. Springer, Cham. https://doi.org/10.1007/978-3-031-11644-5_28
  4. Van Campenhout, R., Clark, M., Jerome, B., Dittel, J. S., & Johnson, B. G. (2023). Advancing intelligent textbooks with automatically generated practice: A large-scale analysis of student data. 5th Workshop on Intelligent Textbooks at the 24th International Conference on Artificial Intelligence in Education, 15–28. https://intextbooks.science.uu.nl/workshop2023/files/itb23_s1p2.pdf
  5. Van Campenhout, R., Clark, M., Dittel, J. S., Brown, N., Benton, R., & Johnson, B. G. (2023). Exploring student persistence with automatically generated practice using interaction patterns. 2023 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), 1–6. https://doi.org/10.23919/SoftCOM58365.2023.10271578
  6. Koedinger, K., Kim, J., Jia, J., McLaughlin, E., & Bier, N. (2015). Learning is not a spectator sport: Doing is better than watching for learning from a MOOC. Proceedings of the Second ACM Conference on Learning@Scale, 111–120. https://doi.org/10.1145/2724660.2724681
  7. Koedinger, K. R., McLaughlin, E. A., Jia, J. Z., & Bier, N. L. (2016). Is the doer effect a causal relationship? How can we tell and why it’s important. Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 388–397. http://dx.doi.org/10.1145/2883851.2883957
  8. Koedinger, K. R., Scheines, R., & Schaldenbrand, P. (2018). Is the doer effect robust across multiple data sets? Proceedings of the 11th International Conference on Educational Data Mining, EDM 2018, 369–375. https://eric.ed.gov/?id=ED593206
  9. Van Campenhout, R., Johnson, B. G., & Olsen, J. A. (2021). The Doer Effect: Replicating Findings that Doing Causes Learning. Presented at eLmL 2021: The Thirteenth International Conference on Mobile, Hybrid, and On-line Learning, 1–6. https://www.thinkmind.org/index.php?view=article&articleid=elml_2021_1_10_58001
  10. Van Campenhout, R., Johnson, B. G., & Olsen, J. A. (2022). The Doer Effect: Replication and comparison of correlational and causal analyses of learning. International Journal on Advances in Systems and Measurements, 48–59. https://www.iariajournals.org/systems_and_measurements/sysmea_v15_n12_2022_paged.pdf
  11. Van Campenhout, R., Jerome, B., & Johnson, B. G. (2023). The Doer Effect at Scale: Investigating Correlation and Causation Across Seven Courses. In LAK23: 13th International Learning Analytics and Knowledge Conference (LAK 2023), 357–365. https://doi.org/10.1145/3576050.3576103
  12. Hubertz, M., & Van Campenhout, R. (2023). Leveraging learning by doing in online psychology courses: Replicating engagement and outcomes. The Fifteenth International Conference on Mobile, Hybrid, and On-line Learning, 46–49. https://www.thinkmind.org/index.php?view=article&articleid=elml_2023_2_60_50025