ubiquity / ubiquibot

Putting the 'A' in 'DAO'
https://github.com/marketplace/ubiquibot

`/query` @organization #721

Open 0x4007 opened 1 year ago

0x4007 commented 1 year ago

Context

The `/query` command is useful for pulling up information on a specific DevPool contributor.

A useful statistic to eventually monitor is how reliably an organization pays out bounties. For example, when a new bounty hunter wants to decide whether they can trust an organization before picking up their first task, they should be able to query the organization for its statistics.

Draft I - "Lag Time"

I think we can use the time label as a point of reference (though this is a very brittle starting point) for how much delay an organization has in processing and merging pull requests.

For example: a task has a 1-day time label, and the organization takes 3 days to merge the pull request after it is marked ready for review (measured from the latest "ready for review" status update).

The organization has a lag time of 300%.
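A minimal sketch of that calculation in TypeScript (the `lagTimePercent` helper is hypothetical, and it assumes the time label has already been parsed into days):

```typescript
// Hypothetical sketch: lag time as a percentage of the issue's time label.
// `timeLabelDays` is assumed to be parsed from a label like "Time: <1 Day";
// `reviewReadyAt` is the latest "ready for review" status update.
function lagTimePercent(timeLabelDays: number, reviewReadyAt: Date, mergedAt: Date): number {
  const MS_PER_DAY = 24 * 60 * 60 * 1000;
  const reviewDays = (mergedAt.getTime() - reviewReadyAt.getTime()) / MS_PER_DAY;
  return (reviewDays / timeLabelDays) * 100;
}

// The example above: a 1-day task merged 3 days after it was ready for review.
console.log(lagTimePercent(1, new Date("2024-01-01"), new Date("2024-01-04"))); // 300
```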

Perhaps we can have an organization leaderboard for reputation. When an organization's information is queried, we can calculate its position on the performance leaderboard based on its median lag time (I'm avoiding the average because some weird edge cases could make positions jump around the leaderboard unpredictably).
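To illustrate why the median behaves better here, a hedged sketch of the ranking (these names are hypothetical, not existing bot code):

```typescript
// Hypothetical sketch: rank organizations by median lag time (lower is better).
// The median keeps one pathological review from jerking an org's rank around.
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function rankOrgs(lagTimesByOrg: Map<string, number[]>): string[] {
  return [...lagTimesByOrg.entries()]
    .map(([org, lagTimes]) => ({ org, medianLag: median(lagTimes) }))
    .sort((a, b) => a.medianLag - b.medianLag)
    .map(({ org }) => org);
}
```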

Tasks:

  1. Basic `/query` support for orgs. This can return a table with no data since the database side isn't ready yet. The table should have a row labeled "review lag time" holding a float (in the above example it would be 3), and the next row should be "rank", an integer. See the sketch after this list.
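A sketch of what that placeholder rendering could look like (function name and shape are assumptions, not existing bot code):

```typescript
// Hypothetical sketch: render the basic /query @organization response as a
// markdown table, with placeholders until the database side exists.
function renderOrgQueryTable(reviewLagTimeDays: number | null, rank: number | null): string {
  return [
    "| Metric          | Value |",
    "| --------------- | ----- |",
    `| Review lag time | ${reviewLagTimeDays ?? "-"} |`,
    `| Rank            | ${rank ?? "-"} |`,
  ].join("\n");
}
```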

Future tasks:

  1. Calculate the review lag time for every closed issue and save it in the database. This will be used to calculate the median.
  2. When an issue is closed, the bot should comment a modified version of the `/query` table. This one would show a diff for the lag time and the ranking (a rendering sketch follows the example):
```diff
- | Review lag time | 2.5 |
- | Rank            | 55  |
+ | Review lag time | 3.1 |
+ | Rank            | 63  |
```
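A hedged sketch of how the bot could build that before/after comment (the `OrgStats` shape is an assumption):

```typescript
// Hypothetical sketch: when an issue closes, recompute the org's stats and
// comment the old vs. new values as diff lines (the bot would wrap them in a
// diff-fenced code block before posting).
interface OrgStats {
  reviewLagTimeDays: number;
  rank: number;
}

function renderStatsDiff(before: OrgStats, after: OrgStats): string {
  return [
    `- | Review lag time | ${before.reviewLagTimeDays} |`,
    `- | Rank            | ${before.rank} |`,
    `+ | Review lag time | ${after.reviewLagTimeDays} |`,
    `+ | Rank            | ${after.rank} |`,
  ].join("\n");
}
```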

Draft II - "Average Dollars Rewarded per Day of Review"

Example Calculations

Let's say:

  * $1000 bounty, turnaround in 1 day: 1000 / 1 = 1000
  * $100 bounty, 1 day: 100 / 1 = 100
  * $100 reward, 10 days: 100 / 10 = 10
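The metric itself is a one-liner; a sketch with the example numbers (the function name is hypothetical):

```typescript
// Sketch of the Draft II metric: dollars rewarded per day of review.
function dollarsPerReviewDay(bountyUsd: number, reviewDays: number): number {
  return bountyUsd / reviewDays;
}

console.log(dollarsPerReviewDay(1000, 1)); // 1000
console.log(dollarsPerReviewDay(100, 1));  // 100
console.log(dollarsPerReviewDay(100, 10)); // 10
```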

Obviously it would be in new bounty hunters' best interest to work with organizations that have a high "Average Dollars Rewarded per Day of Review" stat.

It would be useful to be able to query both repository-level and organization-wide statistics, because different repositories may have different review teams.
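One way to support both scopes is to store per-repo samples and aggregate up to the org; a hedged sketch (the `RepoStats` shape is an assumption):

```typescript
// Hypothetical sketch: keep samples per repo so both repo-level and org-wide
// figures can be served from the same data.
interface RepoStats {
  org: string;
  repo: string;
  dollarsPerReviewDay: number[]; // one sample per completed bounty
}

function orgWideAverage(stats: RepoStats[], org: string): number | null {
  const samples = stats.filter((s) => s.org === org).flatMap((s) => s.dollarsPerReviewDay);
  if (samples.length === 0) return null;
  return samples.reduce((a, b) => a + b, 0) / samples.length;
}
```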

0x4007 commented 1 year ago

I would feel more confident if we could improve the lag time calculation strategy. The time label seems brittle, but it also incentivizes orgs to be "fair" about time estimates.

More complex tasks will have a larger time estimate, and perhaps that will be reflected in a longer review period.

Another metric to consider is the timestamp delta between when the assignee requests a review and when the first responder from the organization posts a conclusive review (changes requested or approval).

We can average that per pull request, in case more changes need to be made and the review has to be redone; a sketch of the calculation follows below.
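A hedged sketch of that calculation against the GitHub API via Octokit. The pairing logic — matching each review request to the first conclusive review submitted after it — is an assumption; real PR timelines are messier.

```typescript
import { Octokit } from "@octokit/rest";

// Hypothetical sketch: average the delta between each "review_requested"
// timeline event and the first conclusive review (approval or changes
// requested) submitted after it.
async function averageReviewResponseMs(
  octokit: Octokit,
  owner: string,
  repo: string,
  pull_number: number
): Promise<number | null> {
  const { data: timeline } = await octokit.rest.issues.listEventsForTimeline({
    owner,
    repo,
    issue_number: pull_number,
    per_page: 100,
  });
  const requestTimes = timeline
    .filter((e: any) => e.event === "review_requested" && e.created_at)
    .map((e: any) => new Date(e.created_at).getTime());

  const { data: reviews } = await octokit.rest.pulls.listReviews({ owner, repo, pull_number });
  const conclusiveTimes = reviews
    .filter((r) => (r.state === "APPROVED" || r.state === "CHANGES_REQUESTED") && r.submitted_at)
    .map((r) => new Date(r.submitted_at as string).getTime())
    .sort((a, b) => a - b);

  // Pair each request with the first conclusive review at or after it.
  const deltas: number[] = [];
  for (const requestedAt of requestTimes) {
    const reviewedAt = conclusiveTimes.find((t) => t >= requestedAt);
    if (reviewedAt !== undefined) deltas.push(reviewedAt - requestedAt);
  }
  return deltas.length ? deltas.reduce((a, b) => a + b, 0) / deltas.length : null;
}
```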

Ideally, when the assignee signals that they have made changes (commits), they should use GitHub's "re-request review" function. Unfortunately, I've seen that they tend to just tag the last reviewer and ask them to recheck, which is difficult for the bot to interpret without spending money on ChatGPT credits.


Each strategy seems to be loaded with assumptions, which is bad for accuracy; they need refinement before this can be funded as a bounty.