Turn uncertainty into actionable learning — faster, with fewer students. Adaptive Experiments builds Experiments‑As‑a‑Service Infrastructure (EASI) to make real‑world educational field experiments simple, reproducible, and effective. Lower the barrier for teachers and researchers to test instructional designs, discover what improves learning, and deploy improvements at scale.
Run randomized and adaptive experiments (Thompson sampling & Bayesian inference) to get answers with fewer participants. Monitor results in real time with dashboards and alerts, and keep experiments reproducible with versioned configs and analysis pipelines.
Integrate with analytics, LMS, and CI for production workflows. Contribute experiments, access reproducible analysis pipelines, and use curated open training materials to scale practice.
The overall goal of EASI is to provide for the rapid creation and deployment of experiments in educational technologies.
A–Design: Help researchers investigate theories of learning and discover how to improve instruction by designing randomized field experiments on components of real-world digital educational resources.
B–Analysis: Facilitate sophisticated analysis of experiments in the context of large-scale data about student profiles, for example to discover which interventions are effective for different subgroups of students.
C–Adaptation: Enable research into adaptive experimentation by providing a testbed for algorithms that dynamically analyze experiment data, improving learning faster with fewer participants.
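The adaptive allocation described above (Thompson sampling with Bayesian inference) can be sketched in a few lines. This is an illustrative simulation, not EASI's implementation: it uses Beta-Bernoulli arms, and the arm names and success rates are made up.

```python
import random

def thompson_pick(arms):
    """Pick an arm by sampling once from each arm's Beta posterior."""
    samples = {name: random.betavariate(a, b) for name, (a, b) in arms.items()}
    return max(samples, key=samples.get)

def update(arms, arm, success):
    """Bayesian update: a success increments alpha, a failure increments beta."""
    a, b = arms[arm]
    arms[arm] = (a + success, b + (1 - success))

# Two instructional versions, each starting from a uniform Beta(1, 1) prior.
arms = {"version_a": (1, 1), "version_b": (1, 1)}

# Hypothetical true success rates, unknown to the algorithm.
true_rates = {"version_a": 0.55, "version_b": 0.70}

random.seed(0)
for _ in range(500):
    arm = thompson_pick(arms)
    success = 1 if random.random() < true_rates[arm] else 0
    update(arms, arm, success)

# Pull counts per arm (subtracting the two prior pseudo-counts).
pulls = {name: a + b - 2 for name, (a, b) in arms.items()}
```

Because allocation drifts toward the better-performing version as evidence accumulates, most students end up in the stronger condition, which is how adaptive designs reach conclusions with fewer participants than a fixed 50/50 split.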
A concrete win: well‑designed experiments have shown small instructional tweaks can raise outcomes substantially — for example, prompting students to reflect on course relevance improved performance by roughly half a letter grade (Hulleman & Harackiewicz, 2009).
How it works
Design: author alternative instructional versions and an experiment config.
Deploy: route users and run adaptive allocation in production.
Learn: visualize outcomes, adapt or declare winners, and export reproducible analyses.
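The design step above centers on an experiment config. As a sketch of what a versioned, reproducible config might contain, here is a hypothetical example; the field names and schema are illustrative assumptions, not EASI's actual format.

```python
# Hypothetical experiment config. All keys and values are illustrative;
# a real deployment would define its own schema.
experiment = {
    "name": "relevance-prompt-vs-control",
    "version": "2024-05-01.1",            # versioned so analyses are reproducible
    "arms": {
        "control": {"weight": 0.5},
        "relevance_prompt": {"weight": 0.5},
    },
    "allocation": "thompson_sampling",    # or "uniform" for a fixed randomized design
    "outcome": "quiz_score",
    "priors": {"alpha": 1, "beta": 1},    # uniform Beta prior for each arm
}

# Sanity check: initial arm weights must sum to 1.
assert sum(arm["weight"] for arm in experiment["arms"].values()) == 1.0
```

Keeping the config as versioned data, separate from the instructional content itself, is what lets the same experiment be re-run, audited, or re-analyzed later.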
Hulleman, C. S., & Harackiewicz, J. M. (2009). Promoting interest and performance in high school science classes. Science, 326(5958), 1410–1412.