Open ThomasKranitsas opened 7 years ago
With a good rating system, we may:
We had a reviewer rating, and TC staff decided to remove it because it didn't work well. Copilots and PMs don't have time to review reviewers.
@lsentkiewicz My idea was to provide feedback (a rating) only when there is a reason to give a negative rating. Something like giving penalties to reviewers, without requiring PMs/copilots to review reviewers.
@lsentkiewicz Yeap, in situations where some competitors reach the PM for re-appeals anyway, it won't be much extra work to officially add some "penalty points" to reviewers for their mistakes (i.e., the re-appeals found to be valid).
If the reviewer gets "penalty points" he will try to appeal it, and it will create another discussion copilot vs. reviewer.
TC staff tried to implement a system in which payments were tied to review quality. It was probably 6 or 8 years ago. They dropped it and decided to create a simpler system.
btw, this is not a new problem. Try searching the old forum topics, using "review" or "review quality" as a search query. Example: https://apps.topcoder.com/forums/?module=Thread&threadID=842927&start=0&mc=26#1970612
@lsentkiewicz
If the reviewer gets "penalty points" he will try to appeal it, and it will create another discussion copilot vs. reviewer.
What about a system where such problematic cases are collected in a queue, and a selected jury of the most experienced copilots/managers/members/reviewers reviews these cases periodically? Their decisions won't affect the outcomes of the contests involved (that would create a real mess, as the money has already been paid and the winning code has probably already passed further into dev/production), but they will affect the reviewer rating, helping to prevent similar issues in the future. This removes the overhead from specific contests (the copilot just has to file the case into the system), but will (i) rank reviewers by reliability; (ii) create a reference for reviewers; (iii) reduce the appeal hell, as it will be a group decision by reputable people, so one has to accept it.
What do you think?
a selected jury of most experienced copilots/managers/members/reviewers
They won't work for free, and TC admins don't want to pay anything extra. They reduced the number of reviewers from 3 to 2 to limit review costs.
They won't work for free, and TC admins don't want to pay anything extra.
Well, you get the CAB, and hot discussions in Slack, working for free; why wouldn't it work for such a jury? :) Also, budget matters, but whether it is worth the spending is a secondary question. What do you think about the idea itself, regardless of the costs?
Reviewing is a time-consuming task :)
Speaking hypothetically, what about something similar to the lyft/uber system?
The way they operate, riders and drivers rate each other 1-5 stars to ensure good quality. If a driver's average drops below 3.5 stars or so, they are banned from driving on the app for a period of time. If I rate a driver 5 stars, writing any further comments is optional; if I rate lower than 5 stars, I am required to give a reason why. This would make it easier for the Topcoder team, as we wouldn't have to monitor every situation and every reviewer; instead, the focus would be on the reviewer(s) that have fallen below the mark, investigating why that is and whether it is fair.
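The flow described above can be sketched in a few lines. This is a purely hypothetical illustration; the class name, the 3.5-star cutoff, and the minimum-sample guard are assumptions based on the comment, not anything Topcoder has implemented.

```python
SUSPEND_BELOW = 3.5   # assumed cutoff, per the "less than 3.5 stars" example
MIN_RATINGS = 10      # assumption: don't suspend anyone on a tiny sample

class ReviewerFeedback:
    """Hypothetical Uber/Lyft-style feedback record for one reviewer."""

    def __init__(self):
        self.ratings = []  # list of (stars, comment) pairs

    def rate(self, stars, comment=None):
        # A 5-star rating needs no comment; anything lower must explain why.
        if not 1 <= stars <= 5:
            raise ValueError("stars must be 1..5")
        if stars < 5 and not comment:
            raise ValueError("a rating below 5 stars requires a reason")
        self.ratings.append((stars, comment))

    def average(self):
        return sum(s for s, _ in self.ratings) / len(self.ratings)

    def is_suspended(self):
        # Flag the reviewer only once enough ratings have accumulated.
        return len(self.ratings) >= MIN_RATINGS and self.average() < SUSPEND_BELOW
```

The key design point is the asymmetry: good ratings are frictionless, while bad ratings force the rater to justify themselves, which gives the Topcoder team something concrete to investigate.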
@hokienick that's exactly what I had in mind 😄
@hokienick We had almost the same system. The copilot could rate a reviewer with Bad/Average/Good. The base rating was 100. Good reviewers had rating >110, but bad reviewers < 90. Reviewers were assigned based on some algorithm. Good reviewers were assigned more often than bad reviewers, but also there was a random factor.
Copilots didn't have time to analyze the scorecards, and in 90% of all reviews they assigned a 'Good' rate. The other 10% were re-appeals or very bad reviews.
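The old mechanism described above could be sketched roughly as follows. The thread only states the general shape (base rating 100, Good/Average/Bad feedback, and assignment weighted toward better reviewers with a random factor); the rating deltas and the weighting function here are illustrative assumptions.

```python
import random

BASE_RATING = 100
DELTAS = {"Good": +2, "Average": 0, "Bad": -5}  # illustrative values only

def apply_feedback(rating, grade):
    """Nudge a reviewer's rating after a copilot's Good/Average/Bad grade."""
    return rating + DELTAS[grade]

def pick_reviewer(ratings, rng=random):
    """Weighted random choice over {name: rating}: higher-rated reviewers
    are picked more often, but everyone keeps a nonzero chance (the
    'random factor' mentioned in the comment)."""
    names = list(ratings)
    weights = [max(ratings[n], 1) for n in names]  # clamp so nobody hits zero
    return rng.choices(names, weights=weights, k=1)[0]
```

With weights proportional to rating, a reviewer at 120 gets picked ~1.5x as often as one at 80, which matches the "more often, but with a random factor" description.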
I think it should be the submitter who is given a chance to give feedback, and the copilot can look at feedback that falls below a certain threshold.
Ultimately, people will be motivated to take part in the challenges only if competitors think the review is fair and good.
So if it's time vs. attracting quality competitors, I would choose the latter.
@lijulat then every submitter who won't be placed 1st/2nd (get paid) will give negative feedback :)
@ThomasKranitsas What you describe is a universal objection to any feedback-rating system :-D And it is not quite true if the process is organised in the correct way, like:
Putting @dmessing comment from Slack here:
The idea of the old system was good, but it did not produce worthwhile results. We need to update the algorithm and who can rate the reviewers.
@hokienick and @dmessing In fact, I wonder what 'did not produce worthwhile results' actually means.
CAB Meeting
Cardillo leads Reviewers. Next quarter, they will set up goals for this issue. Will keep the CAB in the loop.
Any update, or a schedule for when the new rating system will be introduced? 1 year has passed.
There were many discussions on Slack about bad reviews that led to the win of an average, or even worse a bad, submission, and in most cases to re-appeals.
Many members asked for a rating system for reviewers.
Having a rating system for reviewers might not directly affect the choice of reviewers for a challenge, but it's a good reason for reviewers to perform better reviews. Potentially, Topcoder could give some bonus $$ to reliable reviewers.
The rating of a reviewer could be a factor in the reviewer's payment formula; thus a good rating would increase the payment and a bad rating would decrease the final payment.
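One minimal way to plug the rating into the payment formula, assuming the base-100 rating scale mentioned elsewhere in this thread. The multiplier bounds and the formula itself are assumptions for illustration, not a proposal Topcoder has adopted.

```python
BASE_RATING = 100  # the base rating used by the old system

def review_payment(base_payment, rating, floor=0.8, ceiling=1.2):
    """Scale the base review payment by the reviewer's rating relative to
    100, clamped to [floor, ceiling] so one bad stretch can't zero out pay."""
    multiplier = max(floor, min(ceiling, rating / BASE_RATING))
    return round(base_payment * multiplier, 2)
```

So a reviewer rated 110 would earn 10% more than the base payment, one rated 90 would earn 10% less, and extreme ratings are capped by the clamp.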
What could be a reason to give a negative rating to a reviewer: