Automated Grading of Exercises

In automated grading, instructor-provided routines check and score student-constructed responses, sometimes returning corrective feedback to the student.

Automated grading can be as simple as software-scored multiple-choice questions. In more complex cases, an exercise may have one correct solution that can be expressed in a number of equivalent ways, or an acceptable solution may be any instance from an open-ended set meeting given conditions. For automation, acceptable solutions must be specified, grading standards defined, and sometimes a model of corrections to common errors developed. The sketch after this paragraph illustrates the equivalent-forms case.
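As one minimal sketch of grading a response that may take equivalent forms, the code below checks a short algebra answer for symbolic equivalence using the SymPy library; the function name and reference answer are illustrative assumptions, not part of any particular grading system.

    # A minimal sketch of equivalence-aware grading, assuming the SymPy
    # library is available; grade_expression and the sample answers are
    # hypothetical names chosen for illustration.
    import sympy

    def grade_expression(student_answer: str, reference: str) -> bool:
        try:
            student = sympy.sympify(student_answer)
        except sympy.SympifyError:
            return False  # an unparseable response is marked incorrect
        correct = sympy.sympify(reference)
        # Two expressions are equivalent if their difference simplifies to zero.
        return sympy.simplify(student - correct) == 0

    print(grade_expression("2*(x + 1)", "2*x + 2"))  # True: equivalent form accepted
    print(grade_expression("2*x + 1", "2*x + 2"))    # False: incorrect answer rejected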

An example is a programming exercise in which students construct code that is then run against instructor-provided test data. The output is evaluated, and automated routines identify errors in the student's work and provide feedback. When run in real time, iterations can occur rapidly, giving the student dynamic formative feedback.
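A minimal sketch of this test-based workflow follows, assuming a hypothetical exercise that asks for a function median(values) and a student submission importable as a module named submission; all names here are illustrative.

    # A minimal sketch of grading a submission against instructor-provided
    # test cases; the module name "submission" and function "median" are
    # hypothetical examples, not a real autograder API.
    import importlib

    TEST_CASES = [          # instructor-provided inputs and expected outputs
        ([1, 3, 2], 2),
        ([4, 1, 3, 2], 2.5),
        ([5], 5),
    ]

    def grade_submission(module_name: str = "submission"):
        student = importlib.import_module(module_name)
        passed, feedback = 0, []
        for values, expected in TEST_CASES:
            try:
                result = student.median(values)
            except Exception as exc:      # a crash counts as a failed test
                feedback.append(f"median({values}) raised {exc!r}")
                continue
            if result == expected:
                passed += 1
            else:
                feedback.append(
                    f"median({values}) returned {result}, expected {expected}")
        # Return a fractional score plus per-test corrective feedback.
        return passed / len(TEST_CASES), feedback

A production autograder would typically run the submission in an isolated process with time and memory limits rather than importing it directly, so that buggy or malicious code cannot affect the grading system.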

Given the scalability of automated grading, assessments using this technique are commonplace in large online courses such as MOOCs.

See Also

Stanford Lytics Lab