Hi! First of all, thanks for your amazing work on VPL! I'm currently using it in 3 of my courses, and it helps a lot.
Considering you already have similarity checking implemented, I'd like to ask for a new feature: Similarity Evaluation. As the name suggests, it would grade code based on its similarity to a set of reference files. There are two use cases for this:
Non-programming languages, like HTML and CSS. I could add some reference 10/10 implementations, and a student's grade would be their submission's similarity to the closest reference file, using whichever metric is most favorable to the student;
One could combine it with test-based evaluation and automatically assign a grade of max(aA, bB), where A is the standard test-case-based grade, B is the similarity grade, and 'a' and 'b' are teacher-set weights between 0 and 1 (default 1). That way, even a student who misses every test case could still receive a reasonable grade. In a recent extreme case, a student of mine got the entire thing wrong due to a single character (opening a file for _r_eading instead of _w_riting).
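To make the proposal concrete, here is a minimal sketch of both ideas. It assumes a 0–10 grade scale, and it uses Python's `difflib` ratio as a stand-in for whichever similarity metric VPL actually implements; the function names and weights are illustrative only, not part of VPL:

```python
import difflib

def similarity_grade(submission: str, references: list[str]) -> float:
    """Grade = similarity to the closest reference file, scaled to 0-10.
    difflib's ratio is just a placeholder for VPL's own similarity metric."""
    best = max(difflib.SequenceMatcher(None, submission, ref).ratio()
               for ref in references)
    return round(best * 10, 2)

def hybrid_grade(test_grade: float, sim_grade: float,
                 a: float = 1.0, b: float = 1.0) -> float:
    """max(a*A, b*B): A is the test-based grade, B the similarity grade,
    a and b are teacher-set weights in [0, 1] (default 1)."""
    return max(a * test_grade, b * sim_grade)
```

For example, a submission that fails every test (A = 0) but is nearly identical to a reference solution would still receive `b * B` instead of zero, which is exactly the single-character scenario described above.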
I know this could be done with custom evaluation scripts, but a "native" implementation would be super nice. Test-based evaluation could remain the default mode, with similarity and hybrid (and maybe others) added as options.