jrwashburn opened 3 months ago
This should also file complaints for Inadequate Contribution automatically.
Sent you a Google Sheets access request.
Originally, the requirement was to send complaints to the CRT if an ambassador failed to participate. In the last team call, the governance team proposed to change the bylaws in favor of automatic termination to avoid clogging the CRT agenda. Let's assume that will pass, and design this accordingly.
Currently, we assume we just have the current month's submissions and evaluations; for this purpose, however, we will need at least 6 months of history. I suggest we add a new link to a historical sheet (similar to https://docs.google.com/spreadsheets/d/1cjhrqgc84HdS59eQJPsiNIPKbusHtp2j7dN55u-mKdc/edit?gid=1515736355#gid=1515736355) so that we can check the last 6 months of submissions. Unfortunately, this example link will not work because it does not include the email address, which is what we need to key off of. We will need to discuss with Fradique to get the underlying data structure for the actual submissions and evaluations from the Google Forms that include the email address.
The format of that data will dictate how this works, but the requirement is: look at the most recent 6 months of submissions and accumulate 1 point for each month in which an ambassador did not submit work for review. Then check the last 6 months of reviews, and if in any month the ambassador did not score at least one of their assigned submissions, accumulate an additional point for each month with 0 evaluations provided. If the total score is > 2 for any ambassador, they will be terminated from the program, and we could just send a termination notice to the Ambassador and copy the Sponsor. We could also send a warning email to any ambassador with a score of 1, to let them know that another failure to participate may result in automatic termination from the program.
If we want to be precise with the warning, you would need to track the month of their first violation, and then let them know if they fail within 6 months of that first violation to accurately explain the sliding window.
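Assuming the bylaw change passes, the scoring rule above can be sketched in plain JavaScript. The function names and the per-ambassador boolean arrays here are illustrative, not the real sheet schema; `wasAssigned` reflects that only months with assigned evaluations can earn an evaluation penalty:

```javascript
// Sketch of the 6-month penalty scoring described above. Assumes we can
// derive, per ambassador, three arrays covering the most recent 6 months:
// submitted[i] (submitted work that month), evaluated[i] (scored at least
// one assigned submission), wasAssigned[i] (had evaluations assigned).
function scoreAmbassador(submitted, evaluated, wasAssigned) {
  let points = 0;
  for (let i = 0; i < 6; i++) {
    if (!submitted[i]) points += 1;                   // missed submission
    if (wasAssigned[i] && !evaluated[i]) points += 1; // missed all assigned evaluations
  }
  if (points > 2) return { points, action: 'terminate' }; // notify Ambassador, copy Sponsor
  if (points >= 1) return { points, action: 'warn' };     // warning email
  return { points, action: 'none' };
}
```

This keeps the decision logic separate from the sheet-reading code, so it can be unit-tested without touching the spreadsheet.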
I'm a bit concerned about a strange glitch that I think might exist somewhere in the current mechanism. I'll try to find it. Do you think it's possible that submitting a contrib report might not be counted if the ambassador filled out the form without being logged into a Google account?
The Google Forms data automatically writes to a Google Sheet, so I do not think that would/should happen. If the Ambassador does not enter their email address, they may not be counted, but they don't have to be logged in to a Google account; email is a form field that is entered.
If each ambassador has to evaluate three other ambassadors, then it means that each ambassador receives three evaluations from peer reviewers. And so the numbers in Fradique's table should be the arithmetic mean of these three evaluations?
FullyAutomatic.drawio.pdf: please check the diagram to confirm the algorithm is correct.
Please review the diagrams of the two slightly different algorithms.
When comparing the answers received from the Submission form and recorded in the "Responses" sheet with the list of ambassadors from the Registry sheet (the email column?), do we take their Discord nicknames or their emails? Discord nicknames seem simpler, but email addresses should be more exact.
Originally it was based on Discord handle, but we had many issues with typos, changes, etc. We switched to email and have not had problems since. I recommend sticking with the email implementation.
So we put the email addresses into the Review Log sheet after all, and build the entire matrix from email addresses, right?
@mathematicw I prefer the second flow, but move the two big "If it's been 7 days since..." checks into series instead of parallel, so that everything can happen in a single Processing Responses run. I would also rename it to Processing Evaluations so that it's more intuitive when it should be run.
I have a doubt. Let me recall one function, Handling Form Responses: the evaluations are extracted from the Form Responses sheet. We need to: identify who is evaluating whom based on the 'Discord handle of the ambassador you are evaluating' column in the form; place evaluations in the correct columns based on the sequence they come in (1st, 2nd, or 3rd evaluation); and, if an evaluator doesn't respond, place their email in the grades column instead of a score. The question is: isn't it better not to reveal the emails of evaluators who skipped their evaluations, and to put their Discord handles in the month-sheets instead, if this spreadsheet is going to be publicly available?
However, I long ago implemented everything exactly as discussed: if an ambassador-evaluator has not evaluated the ambassador-submitter assigned to them, their email address is displayed in the monthly list instead of an evaluation. And, by the way, the submitter-evaluator matching matrix was also built long ago from Discord handles :) But this logic is not difficult to change.
We should not reveal ambassador emails. Not sure why the email is being placed there? Is that so you can track that they did not submit a response? You could do that from the evaluation responses sheet instead? If you use Discord, we need to think about handling cases where the Discord handle does not match the Registry.
We should not have a problem today: we use email to track who sent a submission, and email to track who responded with a score. The link from submission to score is on Discord ID, but that should be the same in both, since we provide it to them in the email. If you cannot match a response to a submission on Discord handle, there should be an error; I think we should alert the Sponsor to the discrepancy for manual review before continuing.
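One way to catch handle mismatches before escalating to the Sponsor is a small edit-distance check against the Registry. A sketch, with hypothetical names: anything within a couple of typos is matched, and `null` signals "alert the Sponsor for manual review":

```javascript
// Levenshtein edit distance between two strings.
function editDistance(a, b) {
  const d = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 0; j <= b.length; j++) d[0][j] = j;
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      d[i][j] = Math.min(
        d[i - 1][j] + 1,       // deletion
        d[i][j - 1] + 1,       // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
  return d[a.length][b.length];
}

// Find the Registry handle closest to a possibly misspelled one.
// Returns null when nothing is within `maxTypos` edits, so the caller
// can flag the discrepancy instead of guessing.
function matchHandle(input, registryHandles, maxTypos = 2) {
  let best = null, bestDist = maxTypos + 1;
  for (const h of registryHandles) {
    const dist = editDistance(input.toLowerCase(), h.toLowerCase());
    if (dist < bestDist) { best = h; bestDist = dist; }
  }
  return best;
}
```

This avoids brute-forcing typo variants and degrades safely: an ambiguous or badly garbled handle simply falls through to manual review.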
It would be nice to add one more feature -- in case an ambassador is not selected to review any submissions, we should email them confirming that they were not selected and are not expected to submit evaluations that period. (see #9)
Not sure why the email is being placed there? Is that so you can track that they did not submit a response?
I didn't come up with that.
If you use discord, need to think about handling if discord does not match the registry.
If an ambassador-evaluator specified the submitter's Discord handle with a typo, that's a problem. One option is to create a function to brute-force possible typos, but that seems difficult. Alternatively, we can ask the ambassador to fill out the form again (edit the form). In general, editing should be allowed.
In general this isn't a problem. We have the Registry, which maps the email addresses and Discord handles of all ambassadors, so we can operate with either Discords or emails, depending on preference, as we need.
You could do that from the evaluation responses sheet instead?
There are many options, and we can do it in various ways.
Though if we identify non-responders directly from the form, we won't be able to separate those who avoided evaluation (even though they were assigned submitters) from those who simply didn't have submitters because there were too few.
We know who was assigned, just need to keep track of it.
We can't directly take evaluators' email addresses from the Evaluation Form (to avoid revealing them), nor their Discords, as there are none in the Evaluation Form. We could write a string like "didn't evaluate", for example, but for clarity I suggest writing the evaluators' Discord handles (converted from their emails using the Registry sheet) when they skipped an evaluation. After all, the month-sheets are designed for human monitoring, so it's not even a problem if a handle is outdated or has a typo.
It would be nice to add one more feature -- in case an ambassador is not selected to review any submissions, we should email them confirming that they were not selected and are not expected to submit evaluations that period. (see #9)
Those ambassadors who did not get a submitter to evaluate, although they are listed in the Registry, do not go into the submitter-evaluator matrix as evaluators. In such cases, they should be notified of their exemption from evaluation: they don't respond to the Evaluation Form, but will not be penalized.
Penalty points for non-evaluation will be issued only to those who are listed in the matrix as an evaluator but whose response is not received within 7 days from the date the last email was sent.
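The 7-day rule above could be sketched like this; the field names and the `responses` map are assumptions about the eventual data shape, not the actual form schema:

```javascript
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

// An evaluator earns a penalty only if they appear in the
// submitter-evaluator matrix AND no response arrived within 7 days of the
// last request email.
// assignments: [{ evaluatorEmail, lastEmailSentAt: Date }]
// responses:   Map of evaluator email -> Date of their evaluation response
function overdueEvaluators(assignments, responses, now) {
  return assignments
    .filter(a => {
      const deadline = new Date(a.lastEmailSentAt.getTime() + SEVEN_DAYS_MS);
      const responded = responses.get(a.evaluatorEmail);
      // Overdue only once the deadline has passed with no on-time response.
      return now > deadline && (!responded || responded > deadline);
    })
    .map(a => a.evaluatorEmail);
}
```

Note that ambassadors absent from `assignments` can never be flagged, which matches the exemption rule: no assigned submitter, no penalty.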
I can create a pull request to show you the code and what stage it is at so far.
The basic features are done so far, except for the final processing of scores, penalty points, CRT, and upcoming peer-review notifications (which are a piece of cake compared to the former). However, they need to be tested thoroughly. Additionally, Google has a limit on the number of emails sent per day, which is reached surprisingly quickly, even though it is supposedly 500.
Generally, penalty points for not participating in the Submission or Evaluation processes can (and should) be written only directly in the Overall score sheet, in the "Penalty Points" column, in the row corresponding to that evaluator.
We also want to count the "didn't submit" and "late submission" events (weighted equally) from past months, to get a wider view and to be able to detect ambassadors who have 3 or more penalty points within a 6-month period.
It is also possible to handle past periods, although we could limit the scope of penalty-point accrual and issuance to the most recent periods, starting from an agreed-upon date.
This raises an interesting question: when you first run this algorithm, it will find all past offenders, and it may turn out that even if someone is doing well in the last 6 months, there is still an earlier 6-month period where their penalty points exceeded 3. What happens in such cases?
Implementation note: the total amount of penalty points for each ambassador will be displayed in the Overall score sheet's Penalty Points column. But to monitor whether the 2.97* penalty-point threshold is reached within any contiguous 6-month period in that row, a new dedicated column is needed.
*2.97, because this is the closest attainable value to the threshold when points accumulate solely in 0.33 PP increments (0.33 × 9) for missed single Evaluation requests (assuming the mentioned proposal passes).
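The proposed dedicated column is a classic sliding-window maximum over the monthly penalty points. A sketch, assuming a chronological array of per-month PP totals for one ambassador (the array shape is an assumption, not the sheet layout):

```javascript
// Maximum total penalty points over any contiguous 6-month window.
function maxSixMonthPP(monthlyPP, windowSize = 6) {
  let windowSum = 0, max = 0;
  for (let i = 0; i < monthlyPP.length; i++) {
    windowSum += monthlyPP[i];
    if (i >= windowSize) windowSum -= monthlyPP[i - windowSize];
    // PP are non-negative, so partial windows never exceed a full one.
    max = Math.max(max, windowSum);
  }
  return Math.round(max * 100) / 100; // guard against float drift from 0.33 steps
}
```

The rounding step matters because repeated 0.33 additions drift in binary floating point, and the threshold comparison (>= 2.97) needs exact two-decimal values.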
P.S. Sorry for the multiple edits; the section is not as obvious as it seemed.
Here are some problems that could take quite a bit of time if I work on them alone.
We want to make the Overall score sheet a full-fledged dashboard (at least, I want to make it that way). You already know the columns it has, but here's what else needs to be done:
In the Average Score column, we need to display the arithmetic average of all the scores.
In the Penalty Points column, we need to show the total number of penalty points.
In the Max 6-Month PP column, we need to show the maximum number of penalty points within any six-month period.
In the month columns, the following events can occur in the cells:
However, we cannot combine these string values with numeric values (the Final Score) in the same cells, as the Final Score is transferred from the month-sheet and needs to be in a numeric format.
Modify the Average Score formula to parse the cells in the month columns and extract the numbers, ignoring the text.
Use a color-based logic where the events are represented solely by color codes.
Here the cells are empty (for the test ambassadors), but they "say" what happened via color. (Just an idea, though in some cases it has advantages over combined strings.)
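The first option, parsing the month cells and averaging only the numbers while ignoring status text, could look like this (cell values are illustrative; real cells may hold either numeric Final Scores or strings like "didn't submit" or an evaluator's handle):

```javascript
// Average only the numeric values in a row of month cells, skipping
// status strings and empty cells. Returns null if no numbers are present.
function averageScore(monthCells) {
  const nums = monthCells
    .map(v => (typeof v === 'number' ? v : parseFloat(v)))
    .filter(n => Number.isFinite(n));
  if (nums.length === 0) return null;
  return nums.reduce((a, b) => a + b, 0) / nums.length;
}
```

The same filtering idea works whether this runs in Apps Script or is expressed as a sheet formula, since it only depends on distinguishing numeric from non-numeric cell values.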
Add menu item for Sponsor to automatically file complaints
Need a way to track which complaints have been filed and what period it is for - new sheet perhaps.
Send a complaint email to the Subject and to the current CRT via bcc. (Verify how this was done with Fradique.)
Also check the ability to create the vote table via the Coda API to automate everything -- the only gap would be the response from the Subject. Discuss this more with Fradique re: operations.