Improve feedback for ICT education and studies
April 2, 2019

Relieving workload on teachers and improving feedback for students

In 30 seconds...

As many students and teachers know from experience, learning to code is hard. It is a skill that is learned by doing: by making mistakes and learning from them. Without sufficient feedback, however, it is difficult for students to know how and where to improve, and a lack of feedback can hurt their motivation too.

On the other hand, assessing students individually and giving them sufficient feedback is hard and time-consuming for teachers. Especially with the growing demand for ICT and Computer Science degrees, teachers have less and less time to spend per student.

Another issue often seen in higher education is that programming courses are assessed like other subjects, with the only summative assessment being a final exam at the end of the course. This is not particularly suited to programming, where students need continuous feedback and have to spend time practising. Graded weekly or biweekly assignments, for example, could help solve this. It is necessary, though, to give not only summative feedback, in the form of a grade, but also formative feedback, so that students know what they did wrong and how to improve. And however good this feedback is, students need to receive it promptly, not weeks after they submitted their assignments.

This means that the workload on teachers is high, and it is often impossible to achieve all three goals: summative, formative and timely feedback.

Automated assessment can help make timely feedback possible. When grading programming assignments, manually testing whether the functionality works correctly is hugely time-consuming. Automating this assessment means teachers only have to set up the testing scripts once for all students, and those scripts can potentially be reused for multiple years.


So what are the options for automated testing? The simplest form is input/output testing: you specify a certain input to a function or program and the output you expect, possibly with a regular expression to make the check a bit more flexible. For simple assignments this can be sufficient. Coupling it with rubric items also gives students a bit more formative feedback than just "this is what we expected, this is what your program returned".
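To make this concrete, here is a minimal sketch of such an input/output check in Python. It assumes the student submission is a script called solution.py and that the assignment asks the program to double the number it reads from standard input; both names and the expected output are purely illustrative, not part of any particular tool.

```python
import re
import subprocess

def run_io_test(script, stdin_text, expected_pattern, timeout=5):
    """Run a student script with the given stdin and match its output against a regex."""
    result = subprocess.run(
        ["python3", script],
        input=stdin_text,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    # fullmatch on the stripped output keeps the check strict while
    # tolerating trailing whitespace or a final newline.
    return re.fullmatch(expected_pattern, result.stdout.strip()) is not None

# Illustrative check: input 21 should produce output 42.
if run_io_test("solution.py", "21\n", r"42"):
    print("Test passed: correct output for input 21")
else:
    print("Test failed: expected output matching '42'")
```

Each such check can be tied to a rubric item, so a failing test immediately tells the student which requirement was not met.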

A more advanced form of automated testing can be achieved with unit tests. Unit testing frameworks are ubiquitous and perfectly suited to advanced testing of code, and they make it possible to give students more meaningful feedback.
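As an illustration, the sketch below uses Python's built-in unittest framework to grade a hypothetical assignment in which students implement a fibonacci function in a module called submission; both names are assumptions made for this example.

```python
import unittest

# Assumed layout: the student submits a module `submission`
# containing a function `fibonacci(n)`.
from submission import fibonacci

class TestFibonacci(unittest.TestCase):
    def test_base_cases(self):
        self.assertEqual(fibonacci(0), 0, "fibonacci(0) should return 0")
        self.assertEqual(fibonacci(1), 1, "fibonacci(1) should return 1")

    def test_larger_input(self):
        self.assertEqual(fibonacci(10), 55, "fibonacci(10) should return 55")

    def test_negative_input(self):
        # The message doubles as formative feedback when the test fails.
        with self.assertRaises(ValueError,
                               msg="fibonacci should reject negative input"):
            fibonacci(-1)

if __name__ == "__main__":
    unittest.main(verbosity=2)
```

The descriptive messages attached to each assertion are what turn a bare pass/fail into feedback the student can actually act on.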

Besides functionality, style and structure can be assessed automatically using linters and code quality checkers, tools that are also ubiquitous in production codebases.
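As a rough sketch of how that could look, the snippet below runs the flake8 linter on a submission and converts the number of reported issues into a small point deduction. It assumes flake8 is installed and, as before, that the submission is a file called solution.py; the scoring scheme is purely illustrative.

```python
import subprocess

def style_report(path, per_issue=0.1, max_deduction=2.0):
    """Run flake8 on a submission and turn the issue count into a deduction."""
    result = subprocess.run(["flake8", path], capture_output=True, text=True)
    issues = [line for line in result.stdout.splitlines() if line.strip()]
    deduction = min(len(issues) * per_issue, max_deduction)
    return issues, deduction

issues, deduction = style_report("solution.py")
print(f"{len(issues)} style issue(s) found, deducting {deduction:.1f} point(s)")
for issue in issues[:10]:  # show the first few issues as feedback
    print("  " + issue)
```

Because the linter output pinpoints the file, line and rule for every issue, it can be handed back to students almost verbatim as style feedback.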

Automated testing, however, is not the be-all and end-all: a manual check by the teacher is still recommended, both to verify that the automated assessment is correct and to provide extra formative feedback. Complementary assessment between human and machine ensures that students get timely, high-quality feedback while the workload for teachers doesn't go through the roof.

In the coming months, we will release CodeGrade AutoTest, which allows teachers to set up both simple and advanced automated tests at the press of a button, with the ability to provide high-quality feedback to students.

Youri Voet

Co-founder and CEO
Youri Voet is co-founder and CEO at CodeGrade. During his Computer Science studies and his work as a Teaching Assistant at the University of Amsterdam, he developed CodeGrade together with his co-founders to make their own lives easier. Youri works with many educational institutions to continue to offer them the best CodeGrade experience possible.
