Closed Clashsoft closed 2 years ago
- `===== General ===== (-5P)`: `?/Max-points` should be highlighted if `Post feedback to GitHub` is clicked
- `grading` and `repo` in a new tab. It could be useful to add a new checkbox `Always add in new tab`. I'm unsure about this one, because I am not sure how I will adjust my workflow with fulib.org in the following assignments.
- Nice tool! It feels much faster and gives me a better overview.
- `Code Search` only after grading is done.
- `Save & Close` on "We value your privacy!" made me think something broke.
- `String name;` could be found in both `Game` and `Player`. I had to select `class Game { String name;` to make it unambiguous. This does not work if there is more code in between. - #249
- `Open in GitHub` was really useful to check commits (VSCode needs extensions for that).
- (`int n = scanner.nextInt();`) tripped up Code Search a little. Should have configured a glob pattern to avoid this problem. In the end the effect on grades was negligible because everyone got this right.
- Instead of `Points Total`, `Times Evaluated` and `% of All Solutions`, it should show and sort by `Average % of Points`, calculated as `Points Total / (Task Points * Number of Solutions with this Evaluation)` (e.g. 45 points total across 10 solutions of a 5-point task gives 45 / (5 × 10) = 90%). This will work for positive as well as negative task points (deductions). Perhaps the other statistics could be shown after expanding the task row (like an accordion). In addition, the new space could be used to show an overview of Remarks and how often they were used.
- `Solution by ...`
is useless if only the Student ID is known.
- `-10P` making it `0/10P` in total. If autograde detects something wrong in a subtask, it will also grade (for example) `-1P`. This results in a negative score in the homework: `-1/10P`. I think removing the possibility of negative scores would be a better solution, since negative scores shouldn't be possible in a homework.

In this assignment there were a lot of user errors while grading.
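One way to realize the "no negative scores" suggestion above would be to floor the summed deductions at zero. A minimal sketch, assuming a simple points-plus-deductions model; class and method names are hypothetical, not fulib.org's actual code:

```java
// Sketch of score clamping: deductions may sum below zero, but the
// reported score never drops under 0. Names are illustrative only.
public class ScoreClamp {
    /** Apply (negative) deductions to the task's points, floored at 0. */
    static int clampedScore(int taskPoints, int... deductions) {
        int total = taskPoints;
        for (int d : deductions) {
            total += d; // e.g. -10 for a failed task, -1 for a subtask
        }
        return Math.max(0, total);
    }

    public static void main(String[] args) {
        System.out.println(clampedScore(10, -10));     // 0/10P
        System.out.println(clampedScore(10, -10, -1)); // 0/10P instead of -1/10P
    }
}
```

With such a floor, an extra `-1P` from autograde on a subtask could no longer turn the homework total into `-1/10P`.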
Users will make errors for sure, and there is only so much you can do about it. The tool works as intended, nothing to argue with there, but those grading have to be conscious of the dangers of blindly trusting Code Search.
The things I mentioned are problems that no software can fix without double-checking by other people, which in turn costs more grading time.
Maybe a little preview of what Code Search would find for the selected code snippet could help graders validate their input before making it "public".
The assignment seemed promising for Code Search thanks to the fine-grained tasks, but user errors due to insufficient snippet context made it necessary to double-check code found by Code Search.
- `-0P` in the feedback. I believe that is clearer than `0P`, because the latter might imply the parent task was failed, whilst `-0` clearly indicates that a comment was made with negative consequences.
- The Code Search summary preview helped a lot for selecting snippets with sufficient context.
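To illustrate the snippet-context problem mentioned above (`String name;` matching in both `Game` and `Player`), here is a hypothetical pair of classes of the kind an assignment might contain; the snippet only becomes unambiguous once it includes the enclosing class header:

```java
// Both classes declare the identical line "String name;", so searching
// for that snippet alone matches either class. Selecting
// "class Game { String name;" pins the match to Game, but only as long
// as no other members sit between the class header and the field.
class Game {
    String name;
}

class Player {
    String name;
}
```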
- `+` (???), not even the one result that should always be from the repo in which it was selected.
Please collect comments, feedback, opinions here.
Template: