Agentic AI that coaches your students — automatically.
Autonomous AI agents that grade, coach, and personalise at scale, so every student gets the feedback they need at every milestone, without adding to your workload.
AI coding agents have fundamentally changed what it means to learn software engineering. Students who cannot write a loop from memory can still ship a working application. That is not a problem to solve — it is a reality to design around.
“Are your students developing the judgment, discipline, and professional habits that no AI tool can substitute?”
Syntax is learnable in a weekend. Engineering — the way you decompose a problem, manage complexity, collaborate under pressure, and own a codebase — takes deliberate practice across months. And it leaves traces.
The Problem
Software engineering assessment has a structural flaw.
Universities teach agile development, version control, and collaborative engineering. Then they assess students on a final submission. The mismatch between what we teach and what we measure undermines both.
Output-only assessment is obsolete
In the AI era, producing working code is no longer evidence of understanding. A student can ship a complete application without writing a single meaningful line themselves. But most rubrics still only look at what was submitted.
Feedback arrives too late to matter
Students receive marks 1–2 weeks after a milestone closes. By then the next milestone has started. Feedback that could have changed behaviour never gets the chance to land.
Individual learning is invisible
Team projects hide who actually learned to use version control, write meaningful merge requests, or contribute consistently. One strong student can carry a group, and the rest graduate without the skills.
Rubrics exist on paper but go unenforced
When students know that commit quality, branching, and CI pipelines will not actually be checked in depth, they take shortcuts — and learn the shortcuts instead of the practice.
What We Measure
It's not in the final submission. It's in the process.
Professional software engineering leaves an evidence trail. We read it.
Incremental commits, not last-night dumps
Commit frequency and distribution across the milestone period reveal work habits that a final submission never could; a minimal metric sketch follows these criteria.
Branches and merge requests as collaboration tools
Feature branching, merge request hygiene, and peer review participation — not bureaucratic checkboxes, but evidence of professional workflow.
CI pipelines owned and maintained
Whether students set up, maintain, and act on automated testing pipelines — or ignore failing builds entirely.
Code reviews that improve the work
Review participation, comment quality, and responsiveness to feedback show whether students are learning to collaborate — not just co-exist.
Milestones hit incrementally
Issue tracking, milestone assignment, and task completion patterns show whether students plan and execute — or scramble at the deadline.
Tests written, not just passing
AI can generate tests, but knowing what to test and when requires engineering judgment. We measure test coverage trends and whether tests were written alongside features or added as an afterthought.
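To make the commit-distribution signal concrete, here is a minimal metric sketch. It is illustrative only: the function name, inputs, and the two indicators it returns (share of commits in the final 48 hours, and distinct active days) are our own simplification, not the product's actual rubric.

```python
from datetime import datetime, timedelta

def commit_distribution(commit_times: list[datetime],
                        start: datetime, deadline: datetime) -> dict:
    """Summarise how commits are spread across a milestone window.

    commit_times: author timestamps of one student's commits.
    Returns two simple habit signals: the share of commits landing in
    the final 48 hours, and the number of distinct active days.
    """
    in_window = [t for t in commit_times if start <= t <= deadline]
    if not in_window:
        return {"last_48h_share": 0.0, "active_days": 0}
    crunch = deadline - timedelta(hours=48)
    last_minute = sum(1 for t in in_window if t >= crunch)
    return {
        "last_48h_share": last_minute / len(in_window),
        "active_days": len({t.date() for t in in_window}),
    }
```

A high `last_48h_share` with few active days is the "last-night dump" pattern; steady activity across many days is the incremental habit professional work requires.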
How It Works
Three steps from setup to insight.
No new tooling for students. Works with the GitLab they already use.
01
Connect
Link your GitLab course group in under five minutes. Every student repository is discovered automatically — no manual configuration per team.
02
Assess
Our engine reads the full history of every repository (every commit, branch, merge request, CI run, and code review) and scores it against a professional engineering rubric. A minimal sketch of this pipeline follows these steps.
03
Act
Educators see a cohort dashboard to catch struggling students early. Students get personalised, actionable feedback while there is still time to change course.
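For the technically curious, here is a minimal sketch of what steps 01 and 02 look like against the GitLab API, assuming the python-gitlab client. The URL, token, and group path are placeholders, and the real engine's scoring logic is not shown.

```python
import gitlab  # pip install python-gitlab

# Placeholders: your GitLab URL, a read-only token, and your course group path.
gl = gitlab.Gitlab("https://gitlab.example.edu", private_token="READ_ONLY_TOKEN")
group = gl.groups.get("comp3000-2025")  # hypothetical course group

# Step 01 (Connect): discover every student repository under the group.
for gp in group.projects.list(include_subgroups=True, iterator=True):
    project = gl.projects.get(gp.id)  # full project object for history access

    # Step 02 (Assess): read the evidence trail a rubric would score.
    commits = project.commits.list(get_all=True)
    merge_requests = project.mergerequests.list(state="all", get_all=True)
    pipelines = project.pipelines.list(get_all=True)

    print(project.path_with_namespace,
          len(commits), "commits,",
          len(merge_requests), "MRs,",
          len(pipelines), "CI runs")
```

Because discovery walks the group, adding a new team repository requires no configuration: the next run picks it up automatically.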
For Educators
Assessment that reflects how software is actually built.
Move beyond "does it run?" to "did they engineer it?"
Consistent, rubric-aligned scoring across every student — no manual repo-trawling
Spot disengagement weeks before the submission deadline
Built-in similarity detection across your cohort
Data to support curriculum review, accreditation, and quality assurance
Cut assessment time by up to 70%
For Students
Feedback that builds a professional, not just a grade.
Understand where your habits fall short of industry expectations — while you can still act on it
See your progress on a career ladder benchmarked against real engineering practice
Build a skill record that says more than a transcript
Feedback posted after each milestone so you can improve before the final submission
For Universities
Pedagogy for the AI Era
“With AI coding tools in every student's hands, the question is no longer whether students can produce code — it's whether they understand what they're building and why. Traditional assessments were not designed for this. Ours was.”
Aligns with graduate outcome frameworks and accreditation requirements
Integrates with your existing GitLab infrastructure — no new tooling for students
Designed for lecture-scale cohorts: one educator, thirty repositories, zero compromise on assessment quality
FAQ
Common questions
Do students need to change how they use GitLab?
No. Students continue using GitLab exactly as before — committing, branching, opening merge requests, and closing issues. The system reads their existing activity. Nothing is added to their workflow.
How does this handle AI-generated code?
AI coding tools can generate code, but they cannot simulate a consistent commit history, peer review participation, or incremental task completion over weeks. Our criteria specifically target the engineering process that AI cannot replicate on behalf of a student.
Is this suitable for accreditation purposes?
Yes. The system generates evidence reports aligned with AQF graduate outcomes and ABET/ACS-style criteria. Detailed per-student criterion evidence is exportable for portfolio-based accreditation submissions.
Can we customise the rubric for our unit?
Yes. Rubrics are fully configurable per unit, including enabling or disabling criteria, adjusting weights, and setting grade thresholds. Custom criterion definitions are also supported.
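As an illustration of the kind of configuration this enables, a hypothetical per-unit rubric might look like the following. The schema, criterion names, and thresholds shown here are ours, purely for illustration; consult your deployment for the actual format.

```python
# Hypothetical per-unit rubric configuration (shape is illustrative only).
RUBRIC = {
    "unit": "COMP3000",
    "criteria": {
        "commit_hygiene": {"enabled": True,  "weight": 0.20},
        "merge_requests": {"enabled": True,  "weight": 0.25},
        "ci_ownership":   {"enabled": True,  "weight": 0.15},
        "code_review":    {"enabled": True,  "weight": 0.20},
        "testing":        {"enabled": True,  "weight": 0.20},
        "issue_tracking": {"enabled": False, "weight": 0.00},  # disabled for this unit
    },
    "grade_thresholds": {"HD": 0.85, "D": 0.75, "C": 0.65, "P": 0.50},
}

# Enabled weights should sum to 1.0 so milestone scores stay comparable.
assert abs(sum(c["weight"] for c in RUBRIC["criteria"].values()) - 1.0) < 1e-9
```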
Where is student data stored? What about FERPA / GDPR?
All data is stored on infrastructure you control (self-hosted) or in Australian data centres (cloud-hosted). We do not sell or share student data. Deployment agreements include DPA terms for GDPR and FERPA compliance.
How is data kept secure?
All traffic is encrypted in transit via HTTPS/TLS. Data at rest is encrypted on managed storage. GitLab credentials are stored as encrypted tokens scoped to read-only access — the system never writes to student repositories. Role-based access control ensures educators only see their own units, and students can only view their own feedback. No student data is sent to third-party services unless you explicitly configure an LLM provider under your own credentials.
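The access-control rule described above reduces to a simple check. The sketch below is a conceptual illustration (the role names and fields are ours), not the product's implementation.

```python
# Conceptual sketch of the role-based access rule: educators see only
# their own units; students see only their own feedback.
def can_view(role: str, user_id: str, user_units: set[str],
             resource_unit: str, resource_student_id: str) -> bool:
    if role == "educator":
        return resource_unit in user_units       # own units only
    if role == "student":
        return user_id == resource_student_id    # own feedback only
    return False                                 # deny by default
```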
How does passwordless authentication work for educators?
Educators log in using passkeys, a phishing-resistant, password-free standard built on WebAuthn. On registration, a passkey is created and stored in your device's secure hardware (Face ID, Touch ID, Windows Hello, or a hardware security key). On subsequent logins, your device proves identity without sending any secret over the network. There are no passwords to remember, and nothing for credential-stuffing or phishing attacks to steal.
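For readers who want the intuition, here is a conceptual sketch of the challenge-response idea underneath passkeys, using a raw Ed25519 keypair via Python's cryptography library. Real WebAuthn adds origin binding, attestation, and browser/OS mediation; this only shows why no secret ever crosses the network.

```python
# Conceptual sketch only -- not the actual WebAuthn protocol.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device creates a keypair. Only the PUBLIC key is
# sent to the server; the private key never leaves the device.
device_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_key.public_key()

# Login: the server sends a fresh random challenge; the device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# The server checks the signature against the stored public key.
# Identity is proven without any secret crossing the network;
# verify() raises InvalidSignature if the proof is wrong.
server_stored_public_key.verify(signature, challenge)
print("login verified")
```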
How long does a marking run take?
A full-cohort run on 40 repositories typically completes in 3–8 minutes, depending on repository size and history depth. Single-repo runs take under 60 seconds. Results are available immediately after completion.
The way we teach software engineering needs to catch up with the way software is built.
See your own course data through a professional engineering lens.