Closed tsauvine closed 7 years ago
Could you explain this part again?
In the backend, there should be a way to request the ratings given to a specific user regarding a specific exercise. That is, take all the reviews a user has conducted within an exercise, and aggregate all the ratings given to those reviews. We ultimately want to calculate an average of those.
From my understanding, within an exercise a user might give multiple ratings to different reviews. You want to calculate the average over all the ratings his/her reviews have received. Where do you want to show this aggregation? Who do you want to show it to?
Let's say in Exercise 1 User A creates Reviews X, Y, Z, and those receive ratings. Somewhere there should be a method such as get_ratings(User A, Exercise 1) which returns the ratings so that an average can be calculated. The use case is that the quality of the conducted peer reviews affects grading, so we need a way to download the ratings. Perhaps they could be included in the result list.
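A minimal sketch of what get_ratings and the averaging could look like, assuming plain Python with hypothetical stand-in classes (Review, Rating) instead of the project's real models:

```python
from dataclasses import dataclass

# Hypothetical in-memory stand-ins for the real backend models.
@dataclass
class Review:
    reviewer: str   # user who conducted the review
    exercise: int   # exercise the review belongs to

@dataclass
class Rating:
    review: Review  # the review that was rated
    value: int      # rating value, e.g. on the 0..2 scale

def get_ratings(all_ratings, user, exercise):
    """Return all rating values given to reviews that `user` conducted in `exercise`."""
    return [r.value for r in all_ratings
            if r.review.reviewer == user and r.review.exercise == exercise]

def average_rating(all_ratings, user, exercise):
    """Average of those ratings, or None if the user's reviews received none."""
    values = get_ratings(all_ratings, user, exercise)
    return sum(values) / len(values) if values else None
```

For example, if User A's reviews in Exercise 1 received ratings 2 and 1, average_rating would return 1.5, which could then be included in the result list.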
Could you take a look at #72?
done
@tsauvine I have been thinking that, as for anonymity, it would be safest to hide everyone's names. It is also simpler that way. Let me know what you think.
@tsauvine I am wondering how useful rating a review actually is. I notice that on Amazon, you give a rating together with your review, rather than rating other reviews. Then, if someone finds your review useful, they can give it a thumbs-up (e.g. "5 people found this review useful").
closed by Kha
The goal is that a student receiving peer feedback can rate its helpfulness.
Because a Review can be read by multiple students in the case of group work, we probably need a new model for this; let's call it ReviewRating. It should store
In the 'show review' view, if the viewer is one of the group members, there should be buttons at the end for rating the Review. A student can only give one rating (but can change it later). Keep in mind that a review may be visible to users who are not members of the group whose submission was reviewed, and the buttons should only be visible to group members.
For now, the rating can be hard-coded to a scale of 0..2 (can be e.g. stars). Later we need to think of a way to specify the scale. This would probably be part of the rubric.
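The rules above (one rating per student per review, changeable later, hard-coded 0..2 scale) could be sketched like this in plain Python; the class and method names are hypothetical placeholders for the eventual ReviewRating model:

```python
class ReviewRatingStore:
    """Hypothetical sketch of ReviewRating semantics: each (review, rater)
    pair holds at most one rating, and re-rating overwrites the old value."""

    MIN_RATING, MAX_RATING = 0, 2  # hard-coded scale for now; later from the rubric

    def __init__(self):
        # (review_id, rater_id) -> rating value
        self._ratings = {}

    def rate(self, review_id, rater_id, value):
        """Store or update this rater's single rating for the review."""
        if not (self.MIN_RATING <= value <= self.MAX_RATING):
            raise ValueError("rating out of range")
        self._ratings[(review_id, rater_id)] = value

    def get(self, review_id, rater_id):
        """Return this rater's rating for the review, or None if not rated yet."""
        return self._ratings.get((review_id, rater_id))
```

The view-level check (only showing the buttons to members of the reviewed group) would stay outside this model, in the 'show review' view.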
In the backend, there should be a way to request the ratings given to a specific user regarding a specific exercise. That is, take all the reviews a user has conducted within an exercise, and aggregate all the ratings given to those reviews. We ultimately want to calculate an average of those.