volodya-lombrozo opened 1 year ago
@yegor256 Please, take a look
@volodya-lombrozo in the early days of Zerocracy we counted the messages reviewers posted in PRs and then paid for them. This motivated reviewers to post as many messages as possible (as in your first example). However, we abandoned this practice and introduced the role of QA in projects. Read this policy, specifically the section about QA: http://www.zerocracy.com/policy.html The QA person is supposed to catch situations where reviewers simply say "LGTM" and merge, and to punish them.
How do you suggest improving the algorithm of this library?
@yegor256 I’ve not come up with the final “algorithm” yet, but we can discuss it here:
P.S.1 It is just a sketch
P.S.2 As for QA, from the description of the position it seems like the QA job might be done by a robot (point 3 of my "algorithm").
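To illustrate, here is a minimal sketch of such a robot check, assuming a hypothetical `LgtmDetector` with made-up stock phrases; none of this is part of the library:

```java
// A sketch of point 3 of the "algorithm": a robot that flags
// rubber-stamp reviews. The class and its inputs are hypothetical.
import java.util.List;
import java.util.Locale;

final class LgtmDetector {
    // Stock phrases that, alone, suggest an approval without real review.
    private static final List<String> STAMPS =
        List.of("lgtm", "looks good to me", "+1");

    // Suspicious: approved, zero inline comments, and the summary is
    // nothing but a stock phrase.
    static boolean suspicious(final String summary, final int comments,
        final boolean approved) {
        return approved && comments == 0
            && STAMPS.contains(summary.trim().toLowerCase(Locale.ROOT));
    }
}
```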
@yegor256 what do you think?
@rultor release, tag is 0.0.48
> @rultor release, tag is 0.0.48
@yegor256 OK, I will release it now. Please check the progress here
> @rultor release, tag is 0.0.48
@yegor256 Done! FYI, the full log is here (took me 3min)
@volodya-lombrozo let's see how this edition will work
@volodya-lombrozo I agree that reviews are unequal among themselves and should be scored differently. Nevertheless, in my opinion it is important that an average review continues to earn approximately the same number of points as now (150-200). That is, a large high-quality review would earn more and a low-quality one would earn less, but on average the quantitative value of a review would be preserved. Maybe the resulting amount should be normalized by the sum of all the team's reviews, or we should take a closer look at the proposed coefficients.
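As a hedged sketch of that normalization idea, assuming raw per-reviewer scores are already computed (the target of 175 and the `Normalizer` class are assumptions, not part of the library):

```java
// A sketch of the normalization idea: scale all raw scores by one factor
// so the team average returns to a target (175 is an arbitrary value
// inside the 150-200 range mentioned above). Assumes a positive average.
import java.util.Map;
import java.util.stream.Collectors;

final class Normalizer {
    static Map<String, Double> normalize(final Map<String, Double> raw,
        final double target) {
        final double average = raw.values().stream()
            .mapToDouble(Double::doubleValue)
            .average()
            .orElse(target);
        // One common factor preserves the ratios between reviewers,
        // so a thorough review still earns more than a shallow one.
        final double factor = target / average;
        return raw.entrySet().stream().collect(
            Collectors.toMap(Map.Entry::getKey, e -> e.getValue() * factor));
    }
}
```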
@levBagryansky which "coefficients" do you mean? How are you going to calculate them?
@volodya-lombrozo For example, let's take 10 points for a "suggested changes" event and 5 for an "approved" event; then I will get 45 points and you will get 20 points.
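A small sketch of how such per-event coefficients could be applied, using the numbers from this comment (the event names are illustrative, not GitHub API constants):

```java
// A sketch of per-event coefficients: 10 points per "suggested changes"
// round, 5 per "approved" event, as proposed in this comment.
import java.util.List;
import java.util.Map;

final class ReviewPoints {
    private static final Map<String, Integer> COEFFICIENTS = Map.of(
        "suggested_changes", 10,
        "approved", 5
    );

    static int score(final List<String> events) {
        return events.stream()
            .mapToInt(event -> COEFFICIENTS.getOrDefault(event, 0))
            .sum();
    }
}
```

With these coefficients, four "suggested changes" rounds plus a final approval earn 45 points, while a lone "Looks good to me" approval earns only 5.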
@levBagryansky To be honest, I didn't clearly understand your point (3), but I think we still have to give more points for an average PR compared to the review attached to it. For example, for an average PR the picture should be as follows (the "coefficients" are from my head - I don't know the exact numbers):
1) PR "I've done something" - author gets 150 Points 2) Simple review to that PR with several comments - reviewer gets 50 points 3) Author makes some changes - author gets 0 Points 4) Reviewer accepts the PR - reviewer gets 10 points
In summary: the author gets 150 points, the reviewer gets 60 points.
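A short sketch that reproduces this breakdown, with the same made-up numbers (the `Event` record and the role names are assumptions):

```java
// A sketch of the breakdown above: each event carries a role and points,
// and the per-role totals reproduce the 150/60 summary. All numbers are
// the made-up "coefficients" from the comment, not real values.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

final class PullRequestTotals {
    record Event(String role, int points) { }

    static Map<String, Integer> totals(final List<Event> events) {
        return events.stream().collect(
            Collectors.groupingBy(Event::role,
                Collectors.summingInt(Event::points)));
    }

    public static void main(final String[] args) {
        // Prints the per-role totals: author=150, reviewer=60
        // (map iteration order may vary).
        System.out.println(totals(List.of(
            new Event("author", 150),   // PR "I've done something"
            new Event("reviewer", 50),  // review with several comments
            new Event("author", 0),     // follow-up changes
            new Event("reviewer", 10)   // final acceptance
        )));
    }
}
```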
@volodya-lombrozo yes
@rultor release, tag is 0.0.49
Let's consider this PR: https://github.com/objectionary/eo/pull/2264 where I provided 4 rounds of comprehensive review. In the final statistics this PR will be counted as only a single review. Then let's take another PR review: https://github.com/objectionary/eo/pull/2189 where I left only a few comments and spent much less time identifying problems compared with the first PR.
So, I believe these statistics are extremely demotivating. They either motivate you to ignore PR reviews, because reviews don't earn a reasonable number of points, or to just say "Looks good to me" without a careful review. Both behaviours lead to poor code quality, in my opinion.
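One possible direction, sketched under assumptions (the `Round` record and the base/per-comment weights are hypothetical, not the library's actual algorithm): count review rounds instead of whole pull requests, so four substantial rounds weigh more than one drive-by approval.

```java
// A sketch: score a reviewer's work on one PR by its review rounds,
// weighing each round by its inline comments, so a 4-round review
// outweighs a single "Looks good to me".
import java.util.List;

final class RoundScore {
    record Round(int comments) { }

    // Each round earns a base amount plus a bonus per inline comment.
    static int score(final List<Round> rounds, final int base,
        final int perComment) {
        return rounds.stream()
            .mapToInt(round -> base + round.comments() * perComment)
            .sum();
    }
}
```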