Designing programming assignments is one of the most meaningful parts of teaching—but it’s often the most time-intensive. Defining structural requirements, preventing bad patterns, and writing thorough tests all take time that instructors don’t always have.
AI can meaningfully reduce this setup time. With clear prompts, you can convert natural-language requirements into Semgrep rules or complete unit test suites in minutes—while maintaining full control over pedagogy and grading quality.
This post shows how to use AI to streamline two key components of CodeGrade assignments:
- 🧩 Semgrep rules for structure and style
- 🧪 Unit tests for correctness
You’ll also find concise, reusable prompts to support your workflow across multiple languages and assignment types.
🧩 Semgrep: AI-Assisted Structural Checks
Semgrep is ideal for enforcing how students write code—ensuring they use required patterns or avoid insecure or unintended constructs. In CodeGrade, Semgrep powers the Code Structure Test block, which can check for:
- required functions
- specific coding patterns
- forbidden libraries or functions
- stylistic or pedagogical choices
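Each of these checks boils down to a Semgrep pattern. As a minimal sketch, a required-function check for a hypothetical `binary_search` function in a Python assignment could look like this (the function name is illustrative, not from a real assignment):

```yaml
rules:
  - id: untitled_rule
    # Matches any definition of binary_search, whatever its body contains
    pattern: |
      def binary_search(...):
          ...
    message: Semgrep found a match
    languages: [python]
    severity: WARNING
```

Whether a match counts as passing (a required pattern is present) or failing (a forbidden construct is used) is up to how you configure the Code Structure Test block.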
If you’re new to Semgrep or want a deeper dive into structure-based grading, the CodeGrade guide is the best place to start:
👉 Code Structure Tests with Semgrep
Rather than writing patterns manually, you can ask AI to generate exactly the pattern you need.
Use this prompt to translate natural-language requirements into valid Semgrep rules compatible with CodeGrade.
I want a Semgrep rule that detects the following behavior:
**[describe the behavior you want to match]**
Please output the rule using this exact template,
only filling in `pattern` and `languages`.
Do not change the `id`, `message`, or `severity`:
```yaml
rules:
  - id: untitled_rule
    pattern: YOUR_PATTERN
    message: Semgrep found a match
    languages: [THE_PROGRAMMING_LANGUAGE]
    severity: WARNING
```
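For example, asking for a rule that flags any use of Python's built-in `eval` (a common "forbidden function" check) might yield output along these lines:

```yaml
rules:
  - id: untitled_rule
    # eval(...) matches a call to eval with any arguments
    pattern: eval(...)
    message: Semgrep found a match
    languages: [python]
    severity: WARNING
```

Because the prompt pins down the template, you can usually paste the result straight into a Code Structure Test block with little further editing.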
🧪 Unit Tests: AI-Assisted Behavioral Checks
Where Semgrep checks structure, unit tests check behavior—validating correctness, edge cases, error handling, and student logic.
If you want a deeper dive into CodeGrade’s Python and Java test blocks (pytest, JUnit 5), the official guides are the best place to start. Those guides cover how the blocks work; this post focuses on using AI to quickly generate the tests you’ll run inside them.
Below are three modular prompts that work across languages and align naturally with CodeGrade’s AutoTest blocks.
Prompt 1 — Unit Tests From an Assignment Specification
Use this when you know what the assignment must do but haven’t written a solution yet.
Generate a complete set of unit tests in **[test framework + language]**
targeting **[file/module]**.
Cover expected behavior, edge cases, and invalid inputs.
**Assignment specification:**
[paste high-level spec]
Output **only** the final test code.
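As an illustration, suppose the spec asks for a `factorial(n)` function in `solution.py` that raises `ValueError` for negative input (a hypothetical assignment; both names are ours). With pytest as the framework, the prompt might produce something like:

```python
import pytest

from solution import factorial  # hypothetical module and function under test


def test_base_cases():
    # Expected behavior: 0! and 1! are both 1
    assert factorial(0) == 1
    assert factorial(1) == 1


def test_typical_value():
    assert factorial(5) == 120


def test_negative_input_raises():
    # Invalid input: the spec requires a ValueError for negative n
    with pytest.raises(ValueError):
        factorial(-3)
```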
Prompt 2 — Unit Tests From a Reference Solution
Use this when you already have instructor code and want test cases that reflect its behavior.
Generate unit tests based on the expected behavior of this solution code:
[paste solution code here]
The tests should target: **[filename/function/class to test]**
Use this test framework and language: **[e.g., pytest, JUnit, Jest, xUnit]**
Include standard cases, edge cases, and invalid inputs.
Output **only** the test code.
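For instance, pasting a reference solution such as a hypothetical `count_vowels(text)` helper tends to yield tests that pin down its observed behavior, including details the spec may never have spelled out:

```python
from solution import count_vowels  # hypothetical reference function


def test_mixed_case_string():
    # Behavior inferred from the reference solution: counting is case-insensitive
    assert count_vowels("CodeGrade") == 4


def test_empty_string():
    assert count_vowels("") == 0


def test_no_vowels():
    # Assumes the reference treats 'y' as a consonant
    assert count_vowels("rhythm") == 0
```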
Prompt 3 — Improve an Existing Test Suite
Perfect for improving coverage or evolving an assignment over time.
Improve the following unit tests by adding missing edge cases,
meaningful failure checks, and stricter assertions.
Use the same test framework and target file.
**Existing tests:**
[paste current tests]
Produce a complete revised test file—no explanations.
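The kind of output to expect: the revised file keeps your original tests and layers on boundary values, parametrized invalid inputs, and exact assertions. Sketched against the hypothetical `factorial` suite from Prompt 1, the additions might look like:

```python
import pytest

from solution import factorial  # hypothetical module from the Prompt 1 example


def test_large_value_exact():
    # Stricter assertion: an exact value instead of a loose sanity check
    assert factorial(10) == 3628800


@pytest.mark.parametrize("bad_input", [-1, -100])
def test_negative_inputs_raise(bad_input):
    # Added edge cases: several invalid inputs, not just one
    with pytest.raises(ValueError):
        factorial(bad_input)
```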
🔄 Integrating These Prompts Into CodeGrade Workflows
AI-generated rules and tests integrate naturally into CodeGrade’s AutoTest pipeline:
1. Requirements → Semgrep
Generate structural checks with the Semgrep prompt and drop them into a Code Structure Test block.
2. Behavior → Unit Tests
Use the unit-test prompts to validate correctness and edge cases.
3. Feedback → AutoTest
Students receive fast, consistent, meaningful feedback with minimal manual grading overhead, making assignments reliable and scalable while dramatically cutting setup time.
After generating rules or tests, build a snapshot in CodeGrade to verify that everything works as expected. Minor adjustments may be necessary; if errors occur, paste the AutoTest output back to the AI assistant and ask for corrected rules or test files.
For more complex setups, AI can also produce complete Bash setup scripts (e.g., for installing dependencies or preparing data); just remind it that the AutoTest shell is non-interactive, so scripts must run without prompting for input.
This workflow enables instructors to prepare and refine assignments efficiently while maintaining high-quality automated assessment. Try one of these prompts in your next assignment setup and experience how much faster the process becomes.
Have questions about improving your Autotest setups or integrating AI into your workflow? Our team is here to help at support@codegrade.com.