Better automatic feedback for CSEd
Guides
July 5, 2021

Webinar: Better Automatic Feedback for Computer Science Education

In 30 seconds...

  • CodeGrade can help you improve your manual and automatic feedback.
  • Feedback top tips! Feedback should be educative in nature (focusing on what is correct and incorrect), provide examples (such as code snippets), use the first person, link to rubrics, and be given in a timely manner.
  • Rubric design top tips! Use an even number of levels, limit the number of criteria per category, and avoid negative or competitive level names.
  • Creating clear tests in AutoTest! Split up tests as much as possible, make tests flexible and use custom configurations where possible.
  • Finally, we go over ways to customize the output of JUnit, PyTest, PMD and Checkstyle!

In our latest webinar, we discussed ways to improve your manual and automatic feedback using CodeGrade. The webinar was part of our monthly Focus Groups and was recorded on July 1, 2021; it is available on demand now.

Meaningful Feedback Principles

  1. Educative in Nature. Focus on what the student is doing correctly and incorrectly, use the famous feedback sandwich (Compliment, Correction, Compliment).
  2. Answers Dinham’s 4 Questions. Learners want to know where they stand with regard to their work; answering these questions regularly will help your students and improve your feedback.
    - What can the student do?
    - What can't the student do?
    - How does the student's work compare to that of others?
    - How can the student do better?
  3. Provide a Model or Example in your feedback. Demonstrate how the student can improve, e.g. by using Markdown in CodeGrade to include example code snippets.
  4. Use comments to teach, instead of justifying the grade. Activate your students to make improvements in future work by linking to the rubric (done by default in AutoTest) and to course material.
  5. Use the “I-message” in your feedback. Avoid using “you” in feedback (this can be interpreted as a personal attack); instead use a generic term like “people”, or use the “I-message” to communicate what you observe and what you think about it.
    - Avoid: “You didn’t spend much time on creating a user-friendly UI.”
    - Use: “I notice you did not make your UI very user friendly.”
  6. Give feedback in a timely manner. Numerous studies indicate that with feedback it really is the sooner the better. For instance, use AutoTest to provide your students with instant automated feedback.

Good rubric design principles

  1. Choose the number of levels wisely. A higher number of levels encourages students by showing their progress. Use an even number of levels, so there is no bias towards a middle level. For large classes, 4 is often recommended (e.g. Walvoord et al., 2011).
  2. Choose your criteria wisely. Limit the number of criteria per category (consider adding a new category instead) and use clear, “teachable” criteria (e.g. “code quality is good” vs. “the camelCase naming convention was followed”). Make the rubric easy for students to understand: a rubric is for students and teachers, not just teachers.
  3. Review the following questions to get started (Van Leusen, 2013).
    - What knowledge and skills is the assignment designed to assess?
    - What observable criteria represent those knowledge and skills?
    - How can you best divide those criteria to represent distinct and meaningful levels of student performance?
  4. Try to avoid negative or competitive level headers (Stevens & Levi, 2005). These can discourage students; still, be clear about expectations, failures and successes.
    Examples:
    - Beginning, Developing, Accomplished, Exemplary
    - Needs Improvement, Satisfactory, Good, Accomplished
    - Emerging, Progressing, Partial Mastery, Mastery
  5. Accentuate growth mindset over fixed mindset by using activating and encouraging names and descriptions.
An example "Code Structure and Documentation" Rubric Category in CodeGrade


Supercharge your feedback by using CodeGrade's manual and automatic grading tools especially made for computer science education!

Creating clear tests in AutoTest

  1. Split up (unit) tests as much as possible. This motivates students and shows their progress more clearly.
  2. Use I/O Test options for flexibility. Use substring matching and ignore as much whitespace as possible, to avoid penalizing students for frustrating formatting differences.
  3. Use custom configuration files designed for education. E.g. to ignore complicated or unwanted code quality messages and to focus on specific linter messages that are relevant to your assignment.
  4. Add custom descriptions to your AutoTest steps. In these descriptions, make sure to describe what your test assesses and how students can reproduce the steps (if applicable), and use Markdown!
Custom test descriptions in CodeGrade AutoTest for an R assignment
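As an example of tip 3, an education-focused linter configuration for a Python assignment might look like the following flake8 sketch (the selected rule codes are hypothetical and assume the pep8-naming plugin; adjust them to what your assignment teaches):

```ini
# .flake8 — hypothetical configuration that limits output
# to the checks this assignment is actually about
[flake8]
# Only report line length (E501) and naming conventions (N8xx)
select = E501, N801, N802, N803, N806
max-line-length = 79
```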
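The flexible I/O matching from tip 2 can be sketched as a small normalizing comparison. This is a minimal illustration of the idea, not CodeGrade's actual matching implementation:

```python
import re

def output_matches(expected: str, actual: str) -> bool:
    """Substring match that collapses runs of whitespace,
    so a trailing newline or an extra space does not fail a student."""
    def normalize(s: str) -> str:
        # Replace every run of whitespace with a single space
        return re.sub(r"\s+", " ", s).strip()
    return normalize(expected) in normalize(actual)
```

An exact comparison would reject `"Result:   42\n"` against the expected `"Result: 42"`; the normalized substring match accepts it while still catching wrong answers.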

Improving Unit Test Feedback

In the webinar, we discuss improving unit test feedback for JUnit 5 (Java) and PyTest (Python) unit testing scripts.

Customizing your JUnit 5 tests

You can add custom descriptions / names with the `@DisplayName` annotation and add custom weights with CodeGrade's built-in weight parser (specifically for JUnit 5). When using the `@DisplayName` annotation, do not forget to add `import org.junit.jupiter.api.DisplayName;` to your test file.

Adding custom descriptions and weights to your JUnit 5 unit tests.

Customizing your PyTest tests

You can add custom descriptions / names and weights with the `record_xml_attribute` fixture in PyTest. This fixture modifies the `name` and `weight` attributes of the output XML file that CodeGrade parses in the Unit Test step. To use `record_xml_attribute`, do not forget to request it as an argument of your test function.

Adding custom descriptions and weights to your PyTest unit tests.
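A minimal sketch of such a test file follows; the `add` function, the display name and the weight value are illustrative:

```python
# test_calculator.py — run with: pytest --junitxml=report.xml
def add(a, b):
    return a + b

def test_add(record_xml_attribute):
    # Rename this test in the JUnit-XML report and give it a custom
    # weight, which CodeGrade reads in its Unit Test step.
    record_xml_attribute("name", "Adds two integers correctly")
    record_xml_attribute("weight", "2")
    assert add(2, 3) == 5
```

The fixture is requested simply by naming it as a parameter; PyTest injects it when the test runs.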

Improving Code Quality Feedback

Finally, we also discuss how you can improve the output of your Code Quality steps in CodeGrade. We covered this in a recent blog post as well, which includes many ready-to-use code snippets. Click here to learn more about customizing your PMD and Checkstyle Java Code Quality tests.

With this webinar, we hope to have given you a refresher on good feedback and rubric design for computer science education and inspired you to improve your (autograded) code assignments. Would you like to learn more about the things discussed in this webinar or CodeGrade? Or would you like some more help with your AutoTests? Feel free to email me at support@codegrade.com and I'd be more than happy to help you out!

Devin Hillenius

Co-founder, Product Expert
Devin is co-founder and Product Expert at CodeGrade. During his Computer Science studies and his work as a TA at the University of Amsterdam, he developed CodeGrade together with his co-founders to make their lives easier. Devin supports instructors with their programming courses, focusing on both their pedagogical needs and innovative technical possibilities. He also hosts CodeGrade's monthly webinar.
