autonomys-ambassadors / ambassador-os-peer-review

Automation for Ambassador OS Peer Review Process
Apache License 2.0

Automatically file complaints for Failure to Participate in Peer Review #8

Open jrwashburn opened 3 months ago

jrwashburn commented 3 months ago

Add menu item for Sponsor to automatically file complaints

Need a way to track which complaints have been filed and what period it is for - new sheet perhaps.

Send a complaint email to the Subject and to the current CRT via bcc. (Verify how this was done with Fradique.)

Also check ability to create the vote table via Coda API to automate everything -- only gap would be the response from the Subject. Discuss this more with Fradique re: operations.

jrwashburn commented 3 months ago

Have this file complaints for Inadequate Contribution automatically as well.

mathematicw commented 2 months ago

Sent you a Google Sheets access request.

jrwashburn commented 2 months ago

Originally, the requirement was to send complaints to the CRT if an ambassador failed to participate. In the last team call, the governance team proposed to change the bylaws in favor of automatic termination to avoid clogging the CRT agenda. Let's assume that will pass, and design this accordingly.

Currently, we assume we just have the current month's submissions and evaluations; however, for this purpose, we will need at least 6 months of history. I suggest we add a new link to a historical sheet (similar to https://docs.google.com/spreadsheets/d/1cjhrqgc84HdS59eQJPsiNIPKbusHtp2j7dN55u-mKdc/edit?gid=1515736355#gid=1515736355 ) so that we can check the last 6 months of submissions. Unfortunately, this example link will not work because it does not include the email address, which is what we need to key off of. We will need to discuss with Fradique to get the underlying data structure for the actual submissions and evaluations from the Google Forms that include the email address.

The format of that data will dictate how this works, but the requirement is: look at the most recent 6 months of submissions, and accumulate 1 point for each month in which an ambassador did not submit work for review. Then check the last 6 months of reviews; if in any month the ambassador did not score at least one of their assigned submissions, accumulate an additional point for each month with 0 evaluations provided. If the total score is > 2 for any ambassador, they will be terminated from the program, and we could just send a termination notice to the Ambassador and copy the Sponsor. We could also send a warning email to any ambassador with a score of 1, letting them know that another failure to participate may lead to automatic termination from the program.

If we want to be precise with the warning, we would need to track the month of their first violation, and then let them know that another failure within 6 months of that first violation may trigger termination, to accurately explain the sliding window.
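To make the sliding window concrete, here is a minimal sketch in plain JavaScript of the scoring and warn/terminate logic described above. The record shape (`submitted`, `evaluationsDone`) is an assumption for illustration, not the actual sheet schema:

```javascript
// Sketch of the 6-month sliding-window penalty check described above.
// Each record covers one month for one ambassador:
//   { submitted: boolean, evaluationsDone: number }
// (hypothetical shape; the real data comes from the Google Forms sheets)
function scoreLastSixMonths(records) {
  // Only the most recent 6 months count toward the score.
  const window = records.slice(-6);
  let points = 0;
  for (const month of window) {
    if (!month.submitted) points += 1;            // missed submission
    if (month.evaluationsDone === 0) points += 1; // no evaluations at all
  }
  return points;
}

function decideAction(records) {
  const points = scoreLastSixMonths(records);
  if (points > 2) return 'terminate'; // notify ambassador, copy Sponsor
  if (points >= 1) return 'warn';     // warn of possible auto-termination
  return 'ok';
}
```

A real implementation would read these records from the historical sheet discussed above; this only fixes the decision logic.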

mathematicw commented 2 months ago

I'm a bit concerned about a strange glitch that I think might exist somewhere in the current mechanism. I'll try to find it. Do you think it's possible that submitting a contrib report might not be counted if the ambassador filled out the form without being logged into a Google account?

jrwashburn commented 2 months ago

The Google Forms data automatically writes to a Google Sheet, so I do not think that would/should happen. If the Ambassador does not enter their email address, they may not be counted, but they do not have to be logged in to a Google account; email is a form field that is entered manually.

mathematicw commented 2 months ago

If each ambassador has to evaluate three other ambassadors, then each ambassador receives three evaluations from peer reviewers. So the numbers in Fradique's table should be the arithmetic mean of those three evaluations?

mathematicw commented 1 month ago

FullyAutomatic.drawio.pdf — please check the diagram and confirm the algorithm is correct.

mathematicw commented 1 month ago

Including PP Assignment, pp_separating — please review the diagrams of the two slightly different algorithms.

mathematicw commented 1 month ago

When comparing the answers received from the Submission form (recorded in the "Responses" sheet) with the list of ambassadors from the Registry sheet, do we key on their Discord nicknames or their emails (the email column)? Discord nicknames seem simpler, but email addresses should be more exact.

jrwashburn commented 1 month ago

Originally it was based on Discord handle, but we had many issues with typos, changes, etc. We switched to email and have not had problems since. I recommend sticking with the email implementation.

mathematicw commented 1 month ago

So, we do put the email addresses into the Review Log sheet after all, and build the entire matrix from email addresses, right?

jrwashburn commented 3 weeks ago

@mathematicw I prefer the second flow, but move the two big "If it's been 7 days since..." checks into series instead of parallel, so that everything can happen in a single Processing Responses run. I would also rename it to Processing Evaluations so that it is more intuitive when it should be run.

mathematicw commented 3 weeks ago

I have a doubt. Let me remind you of one function, Handling Form Responses: the evaluations are extracted from the Form Responses sheet. We need to: identify who is evaluating whom based on the 'Discord handle of the ambassador you are evaluating' column in the form; place evaluations in the correct columns based on the order they arrive (1st, 2nd, or 3rd evaluation); and, if an evaluator does not respond, place their email in the grades column instead of the score. The question is: wouldn't it be better not to reveal the emails of evaluators who failed to evaluate, and to put their Discord handles in the month sheets instead, if this spreadsheet is going to be publicly available?

mathematicw commented 3 weeks ago

However, I long ago implemented everything exactly as it was discussed: if an ambassador-evaluator has not evaluated the ambassador-submitter assigned to them, their email address is displayed in the monthly list instead of the evaluation. And, by the way, the submitter-evaluator matching matrix was also built from Discord handles long ago :) But this logic is not difficult to change.

jrwashburn commented 3 weeks ago

We should not reveal ambassador emails. I'm not sure why the email is being placed there. Is that so you can track that they did not submit a response? Could you do that from the evaluation responses sheet instead? If you use Discord, we need to think about handling cases where the Discord handle does not match the Registry.

We should not have a problem today: we use email to track who sent a submission, and email to track who responded with a score. The link from submission to score is on Discord ID, but that should be the same in both, since we provide it to them by email. If you cannot match a response to a submission on Discord handle, there should be an error; I think we should alert the Sponsor to the discrepancy for manual review before continuing.

jrwashburn commented 3 weeks ago

It would be nice to add one more feature -- in case an ambassador is not selected to review any submissions, we should email them confirming that they were not selected and are not expected to submit evaluations that period. (see #9)

mathematicw commented 3 weeks ago

Not sure why the email is being placed there? Is that so you can track that they did not submit a response?

It wasn't my idea.

If you use discord, need to think about handling if discord does not match the registry.

If an ambassador-evaluator specified the submitter's Discord handle with a typo, that's a problem. One option is to create a function that brute-forces possible typos, but that seems difficult. Alternatively, we can ask the ambassador to fill out the form again (edit the form). In general, editing should be allowed.

In general this isn't a problem. We have the Registry, which maps every ambassador's email address to their Discord handle, so we can operate with either Discords or emails, depending on preference, as needed.
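Typo handling doesn't have to be full brute force: a small edit-distance match against the Registry list catches one- or two-character typos and flags everything else for manual review. This is a sketch under assumed inputs (a plain array of registry handles, not the actual sheet code):

```javascript
// Levenshtein distance between two strings (iterative dynamic programming).
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Find the closest registered handle; return null (→ alert the Sponsor
// for manual review) if even the best match is more than `maxTypos` away.
function matchHandle(input, registryHandles, maxTypos = 2) {
  let best = null;
  let bestDist = Infinity;
  for (const handle of registryHandles) {
    const d = editDistance(input.toLowerCase(), handle.toLowerCase());
    if (d < bestDist) {
      best = handle;
      bestDist = d;
    }
  }
  return bestDist <= maxTypos ? best : null;
}
```

The `maxTypos` cutoff is a judgment call; too generous a value risks silently matching the wrong ambassador, so ambiguous cases should still go to the Sponsor.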

mathematicw commented 3 weeks ago

You could do that from the evaluation responses sheet instead?

There are many options, and we can do it in various ways.

mathematicw commented 3 weeks ago

Though if we identify non-responders directly from the form, we won't be able to separate those who avoided evaluation (even though they were assigned submitters) from those who simply didn't have submitters because there were too few submissions.

jrwashburn commented 3 weeks ago

We know who was assigned, just need to keep track of it.

mathematicw commented 3 weeks ago

We can't take evaluators' email addresses directly from the Evaluation Form (to avoid revealing them), nor their Discord handles, since those aren't in the Evaluation Form. We could write a string like "didn't evaluate", but for clarity I suggest writing the evaluators' Discord handles (converted from their emails using the Registry sheet) when they evaded evaluation. The month sheets are designed for human monitoring anyway, so it's not a problem if a handle is stale or contains a typo.

mathematicw commented 3 weeks ago

It would be nice to add one more feature -- in case an ambassador is not selected to review any submissions, we should email them confirming that they were not selected and are not expected to submit evaluations that period. (see #9)

Those ambassadors who did not get a submitter to evaluate, although they are listed in the Registry, do not go into the submitter-evaluator matrix as evaluators. In such cases, they should be notified that they are exempt from evaluation. They do not respond to the Evaluation Form, but they will not be penalized.

Penalty points for non-evaluation will be issued only to those who are listed in the matrix as evaluators but whose response is not received within 7 days of the date the last email was sent.

mathematicw commented 6 days ago

I can create a pull request to show you the code and what stage it is at so far.

mathematicw commented 6 days ago

The basic features are done, except for the final processing of scores, penalty points, CRT, and upcoming peer-review notifications (which are a piece of cake compared to the former). However, everything needs to be tested thoroughly. Additionally, Google has a limit on the number of emails sent per day, which is reached surprisingly quickly, even though it is supposed to be 500.

mathematicw commented 2 days ago

Generally, penalty points for not participating in the Submission or Evaluation processes can (and should) be written only directly in the Overall score sheet, in the "Penalty Points" column, in the row corresponding to that ambassador.

  1. Penalty points for not participating in Submissions can be calculated by comparing the lists of ambassadors on the form responses sheet (within the 7-day time frame) and the month sheet (which has the same list as the Review Log or review matrix). One missed Submission Form leads to one penalty point, added to the existing total in the "Penalty Points" column of the Overall score sheet.

We also want to count the "didn't submit" and "late submission" events (weighted equally) from past months, to get a wider view and to detect ambassadors who have 3 or more penalty points within a 6-month period.

  2. Penalty points for not participating in Evaluations could be calculated by counting events where there is no evaluation in at least one of the three grade fields on the month sheet; the ambassador-evaluator is given 1/3 of a penalty point (per my proposal) for each such case, added to the existing total in the "Penalty Points" column on the Overall score sheet.
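The two accrual rules above can be sketched as a single per-month function (a sketch under assumed inputs; the parameter names are illustrative, not the actual sheet columns):

```javascript
// Penalty-point accrual per the two rules above (sketch; inputs are
// assumptions, not the actual sheet schema):
//   missedSubmission - true if no Submission Form arrived in the 7-day window
//   missingGrades    - how many of the three grade fields are empty (0..3)
function monthlyPenalty(missedSubmission, missingGrades) {
  let pp = 0;
  if (missedSubmission) pp += 1; // one full point per missed submission
  pp += missingGrades * (1 / 3); // 1/3 point per missing evaluation
  return pp;
}
```

The result would then be added to the running total in the "Penalty Points" column.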

It is also possible to handle past periods, although we could limit the scope of penalty-point accrual and issuance to the most recent periods, starting from a certain agreed date.

It raises an interesting thought: if you run this algorithm to find all past offenders, it may turn out that even if someone is doing well in the last 6 months, there is still some earlier 6-month period where their penalty points exceeded 3. What happens in such cases?


Implementation note: the total penalty points for each ambassador will be displayed in the Overall score sheet's Penalty Points column. To monitor whether the 2.97* penalty-point threshold is reached within a contiguous 6-month period in that row, a new dedicated column is needed.

*2.97, because this is the closest attainable value below the threshold when points accumulate only in 0.33 PP increments (9 × 0.33) for missed single Evaluation requests (assuming the proposal mentioned above).
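The dedicated column's check amounts to a maximum over all contiguous 6-month windows of per-month penalty points. A sketch, assuming `monthlyPP` is an array of per-month totals in chronological order (not the actual sheet code):

```javascript
// True if any contiguous 6-month window of per-month penalty points
// reaches the threshold (sketch; monthlyPP is an assumed plain array).
function exceedsThreshold(monthlyPP, threshold = 2.97, windowSize = 6) {
  for (let start = 0; start + windowSize <= monthlyPP.length; start++) {
    const sum = monthlyPP
      .slice(start, start + windowSize)
      .reduce((a, b) => a + b, 0);
    if (sum >= threshold) return true;
  }
  return false;
}
```

Note that accumulating 0.33 rather than exactly 1/3 per missed evaluation sidesteps floating-point comparison issues against 2.97, at the cost of a tiny rounding difference over many months.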

P.S. Sorry for the multiple edits; this section is not as straightforward as it seemed.

mathematicw commented 13 hours ago

Here are some problems that could take quite a bit of time if I work on them alone.


What we have:

We want to make the Overall score sheet a full-fledged dashboard (at least, I want to make it that way). You already know the columns it has, but here is what else needs to be done:

The Problem:

In the month columns, the following events can occur in the cells:

However, we cannot combine these string values with numeric values (the Final Score) in the same cells, as the Final Score is transferred from the month-sheet and needs to be in a numeric format.

Possible Solutions:

  1. Modify the Average Score formula to parse the cells in the month columns and extract the numbers, ignoring the text.

    • This would make the sheet both informative and visually clear (especially with color coding), but it would require more computations and increase the risk of bugs.
  2. Use a color-based logic where the events are represented solely by color codes.

    • These colors would be recognized by other functions, and penalty points (PP) would be assigned based on this color scheme. The analysis over longer periods would also be based on these color codes. The examples in the attached files do not cover all the options I have listed, but they give a rough idea. (Attachments: concatenated markers, Color logic-looking.)

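Option 1 could look something like this in script form. A sketch only; cell values like `'3.5 (late)'` are hypothetical examples of mixed text/number content, not the actual event strings:

```javascript
// Extract the numeric score from a mixed cell value, ignoring any
// surrounding event text; returns null when the cell has no number.
function extractScore(cell) {
  if (typeof cell === 'number') return cell;
  const m = String(cell).match(/-?\d+(\.\d+)?/);
  return m ? parseFloat(m[0]) : null;
}

// Average only the cells that actually contain a number, so event-only
// cells (e.g. "didn't evaluate") do not distort the Average Score.
function averageScore(cells) {
  const nums = cells.map(extractScore).filter((v) => v !== null);
  if (nums.length === 0) return null;
  return nums.reduce((a, b) => a + b, 0) / nums.length;
}
```

This keeps the month columns human-readable while the Average Score stays numeric, which is the trade-off Option 1 describes.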
mathematicw commented 13 hours ago

colored logic (to text) — the cells here are empty (test ambassadors), but they "say" what happened via color. (Just an idea, and in some cases it has advantages over combined strings.)