Semi-Supervised Verified Feedback Generation

Students have enthusiastically taken to online programming lessons and contests. Unfortunately, they tend to struggle due to the lack of personalized feedback when they make mistakes. The overwhelming number of submissions precludes manual evaluation. There is an urgent need for program analysis and repair techniques capable of handling both the scale and the variations in student submissions, while ensuring the quality of the feedback.

Towards this goal, we present a novel methodology called semi-supervised verified feedback generation. We cluster submissions by solution strategy and ask the instructor to identify or add a correct submission in each cluster. We then verify every submission in a cluster against the instructor-validated submission of the same cluster. If faults are detected in a submission, feedback suggesting fixes for them is generated. Clustering reduces both the burden on the instructor and the variations that have to be handled during feedback generation. Verification ensures that only correct feedback is generated.
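
The overall workflow can be summarized by the following sketch. It is only illustrative: the function names (extract_features, instructor_pick, verify_against, suggest_fixes) are hypothetical placeholders, and the actual clustering, verification, and repair are performed by the techniques described in the paper.

```python
from collections import defaultdict

def generate_feedback(submissions, extract_features, instructor_pick,
                      verify_against, suggest_fixes):
    # 1. Cluster submissions by solution-strategy features.
    clusters = defaultdict(list)
    for sub in submissions:
        clusters[extract_features(sub)].append(sub)

    feedback = {}
    for members in clusters.values():
        # 2. The instructor identifies (or adds) one correct submission per cluster.
        reference = instructor_pick(members)
        for sub in members:
            # 3. Verify the submission against the cluster's validated reference.
            faults = verify_against(sub, reference)
            # 4. Generate feedback suggesting fixes only if faults are detected.
            feedback[sub] = suggest_fixes(sub, faults) if faults else None
    return feedback
```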

We have applied this methodology to iterative dynamic programming (DP) assignments. Our clustering technique uses features of DP solutions. We have designed a novel counter-example guided feedback generation algorithm capable of suggesting fixes for all faults in a submission. In an evaluation on 2226 submissions to 4 problems, we generated verified feedback for 1911 (85%) submissions, in 1.6s each on average. Our technique substantially reduces the burden on the instructor: only one submission had to be manually validated or added for every 16 submissions.
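
The counter-example guided algorithm itself is described in the body of the paper; the loop below is only a generic sketch of the counter-example guided idea, with candidate_fixes, apply_fix, passes, and find_counter_example as hypothetical stand-ins rather than our actual procedure.

```python
def repair_with_counter_examples(submission, reference, candidate_fixes,
                                 apply_fix, passes, find_counter_example):
    tests = []  # counter-examples accumulated so far
    for fix in candidate_fixes(submission):
        patched = apply_fix(submission, fix)
        # Cheap filter: skip candidates that still fail a known counter-example.
        if not all(passes(patched, reference, t) for t in tests):
            continue
        # Full check against the instructor-validated reference;
        # returns an input on which the two disagree, or None.
        cex = find_counter_example(patched, reference)
        if cex is None:
            return fix        # verified fix: report it as feedback
        tests.append(cex)     # remember the counter-example for pruning
    return None               # no verified fix within the candidate space
```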