If you’ve ever felt overwhelmed trying to give students meaningful feedback in a programming course, especially as class sizes grow, you’re not alone. Timely, consistent grading is tough to maintain when you’re juggling dozens or even hundreds of submissions. That’s exactly the challenge Dr. Dainius Masiliūnas from Wageningen University set out to address in a recent research project on automatic student submission testing using CodeGrade.
This wasn’t just an experiment in convenience. It was a deep dive into how automated testing could change the way we teach programming, from how fast students get feedback to how reliably we catch bugs, and even how we design courses for long-term sustainability. If you’ve been considering integrating more automation into your course, the findings here might give you a helpful roadmap.
Why the Research Happened
Programming education depends heavily on feedback loops. But as Dr. Masiliūnas saw in his own Geoscripting course, manually grading every submission simply doesn’t scale. There were long delays between student work and instructor feedback, inconsistent evaluations between graders, and major time sinks just to set up grading environments—particularly for assignments in R.
The goal of the research was to find out whether automatic testing, set up through CodeGrade, could solve these bottlenecks while still keeping students engaged and learning effectively.
Setting Up Smart Autotesting for R
The research team started by extending CodeGrade’s autotest features—which work well for Python—to support R. That turned out to be more complex than expected. A typical R environment setup involves installing about fifteen packages, which originally took around twenty minutes per virtual machine.
To speed things up, they used a custom package repository, r2u, which hosts all the necessary packages and their dependencies in one place. With it in place, environment setup dropped to under thirty seconds: every time a student pushed code, a fresh VM spun up, ran the test suite, and delivered feedback through CodeGrade, with no manual setup required.
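As a rough sketch of how such a build might look, a grading image can install R packages as pre-built apt binaries from an r2u-style repository rather than compiling them from CRAN sources with install.packages(). The repository files and the specific r-cran-* packages below are illustrative placeholders, not the research team's actual configuration:

```dockerfile
# Hypothetical grading-VM image: R packages come in as binary apt
# packages, so the whole environment builds in seconds, not minutes.
FROM ubuntu:22.04

# Point apt at the binary package repository. The list file and
# signing key here are placeholders; consult the r2u documentation
# for the real repository URL and key setup.
COPY cranapt.list /etc/apt/sources.list.d/cranapt.list
COPY cranapt_key.asc /etc/apt/trusted.gpg.d/cranapt_key.asc

# Installing from binaries avoids per-package source compilation.
# Package names are examples of what a geoscripting course might use.
RUN apt-get update && apt-get install -y --no-install-recommends \
        r-base-core \
        r-cran-sf \
        r-cran-terra \
        r-cran-testthat \
    && rm -rf /var/lib/apt/lists/*
```

The key design point is moving dependency resolution and compilation out of the per-submission path: the expensive work happens once when the image is built, and each student push only has to start a container and run the tests.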