Open amy21206 opened 1 year ago
I think the current Parsons problems also take the length of the answer into consideration, in addition to the percent of blocks, if I am looking at the correct part of the grading code:
I think we can use the same formula but ignore the indentation part. I am also happy to try the formula on existing entries if needed. Please let me know how we want to move forward, thanks!
Hi Brad @bnmnetp, I talked to Barb and she said the percent calculation looks good. To make it easier to add other content to "act", how about we use:
{
event: "hparsonsAnswer",
div_id: "hparsons_test", // div_id,
act: // json string of the following content
"{
scheme: "block", // "block" for block-based grading, "execution" for execution-based grading (unit test)
correct: "T", // "T" for correct, "F" for incorrect (same as Parsons directive)
answer: ["block 1", "block 2"], // List of blocks in student answer; no hash was used for horizontal Parsons since each block is relatively short
percent: 0.8, // Percent correct. For block-based grading (adapted from the percent grading for vertical Parsons problems): (length of longest increasing sequence / length of answer) * 0.8 + (min(length of student answer, length of correct answer) / max(length of student answer, length of correct answer)) * 0.2. For execution-based grading: percent of unit tests passed.
}"
}
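For reference, a rough sketch of how the block-based percent above might be computed. This is illustrative only, not the actual Runestone code: the function names are made up, and it assumes the "length of answer" denominator refers to the length of the correct answer.

```typescript
// Illustrative sketch of the block-based percent formula (assumed names).

// Length of the longest subsequence of student blocks that appears in the
// same relative order in the correct answer (longest increasing subsequence
// of the student blocks' indices in the correct answer).
function longestOrderedSubsequence(student: string[], correct: string[]): number {
    const indices = student
        .map((b) => correct.indexOf(b)) // index of each student block in the correct answer
        .filter((i) => i >= 0);         // drop blocks that are not in the correct answer
    const tails: number[] = [];
    for (const idx of indices) {
        // Patience-sorting step: replace the first tail >= idx, or extend.
        const pos = tails.findIndex((t) => t >= idx);
        if (pos === -1) {
            tails.push(idx);
        } else {
            tails[pos] = idx;
        }
    }
    return tails.length;
}

// percent = (LIS length / correct length) * 0.8
//         + (min(|student|, |correct|) / max(|student|, |correct|)) * 0.2
function blockBasedPercent(student: string[], correct: string[]): number {
    const orderScore = longestOrderedSubsequence(student, correct) / correct.length;
    const lengthScore =
        Math.min(student.length, correct.length) /
        Math.max(student.length, correct.length);
    return orderScore * 0.8 + lengthScore * 0.2;
}

// Example: a fully correct answer scores 1.0.
// blockBasedPercent(["print(", "'hi'", ")"], ["print(", "'hi'", ")"]) === 1.0
```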
Shall I go ahead and implement this?
Thanks, Zihan
Yes, go ahead.
Dr. Barbara Ericson Assistant Professor, School of Information University of Michigan
Hi, I'm moving the discussion from the email thread here to document the conversation and design decisions. @bnmnetp @barbarer
Zihan wrote:
Hi Brad,
Hope you are enjoying the holiday =) I refactored the code for the original element, removed unused dependencies (Quill and other related libraries), and submitted a PR. For the grade logging scheme, I looked at the Parsons directory and am wondering what you think about this scheme for grading horizontal Parsons problems:
{
event: "hparsonsAnswer",
div_id: "hparsons_test", // div_id
scheme: "block", // "block" for block-based grading, "execution" for execution-based grading (unit test)
correct: "T", // "T" for correct, "F" for incorrect (same as Parsons directive)
answer: ["block 1", "block 2"], // List of blocks in student answer; no hash was used for horizontal Parsons since each block is relatively short
percent: 0.8, // Percent correct. For block-based grading (adapted from the percent grading for vertical Parsons problems): (length of longest increasing sequence / length of answer) * 0.8 + (min(length of student answer, length of correct answer) / max(length of student answer, length of correct answer)) * 0.2. For execution-based grading: percent of unit tests passed.
}
I want to keep it consistent with the other directives in Runestone so it is easier to manage later, and I have a few questions specifically about consistency:
Please let me know what you think of the scheme when you come back from the break, and also whether you would prefer me to submit this type of discussion as GitHub issues in the future =)
Thank you so much for your help!
Best, Zihan
Brad wrote:
Hi Zihan,
I prefer to have these discussions as openly as possible. Github issues are great for that. That way others who come later have easy access to the archive and can understand how decisions were made. That does not happen so easily when the discussion is private email.
I hope Barb will weigh in on the grading scheme. My thinking is to give what you have a try and see how it looks against the data we have.
I would prefer that we have one event for the grading rather than separate events.
The act field in the database was always meant to be free of any particular structure. So putting a json string in there is actually a great way to do it. That makes it easy to add more stuff to act without having to worry about how it is going to get parsed.
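A minimal sketch of what that could look like when the answer event is assembled. The helper name is hypothetical and the logBookEvent pattern is an assumption based on how other Runestone components log events, not the actual implementation.

```typescript
// Illustrative only: build the event payload with act as a JSON string.
interface HParsonsAct {
    scheme: "block" | "execution";
    correct: "T" | "F";
    answer: string[];
    percent: number;
}

function buildAnswerEvent(divId: string, act: HParsonsAct) {
    return {
        event: "hparsonsAnswer",
        div_id: divId,
        // act stays schema-free in the database: serialize the structured
        // fields into one JSON string so new keys can be added later without
        // changing how the column is stored or parsed.
        act: JSON.stringify(act),
    };
}

// Hypothetical usage inside the component:
// this.logBookEvent(buildAnswerEvent("hparsons_test", {
//     scheme: "block", correct: "F", answer: ["block 1", "block 2"], percent: 0.8,
// }));
```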
Barb wrote:
We could grade the horizontal (micro) Parsons the same way as we grade the vertical Parsons - the percent of blocks that are in the correct position.
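For comparison, a tiny sketch of that simpler position-based percent (names are illustrative, not existing code):

```typescript
// Fraction of positions where the student's block matches the correct block.
function positionalPercent(student: string[], correct: string[]): number {
    let matches = 0;
    for (let i = 0; i < correct.length; i++) {
        if (student[i] === correct[i]) {
            matches++;
        }
    }
    return matches / correct.length;
}
```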