Closed Keyrxng closed 4 months ago
After you give this a look over, let me know if this is the way forward or if the permit-based approach would be better?
I think this, with a bit more refinement, would be the more comprehensive approach.
I started a fresh bot with a fresh DB, so I do not have any permits. When you said to just parse the permits, you meant scrape the claim URLs from comments and decode the permit data from those?
I think that parsing the encoded permits from within the GitHub comments is fine. I consider basically all of those already transferred and credited for. I only invalidated a few, and it's okay to give credit for them for now.
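The approach described above, pulling the encoded permit back out of a claim URL, might look something like this sketch (the `claim` query-parameter name and base64-JSON encoding are assumptions about the URL format, not confirmed from the script):

```typescript
// Hypothetical sketch: a claim URL is assumed to embed the permit as
// base64-encoded JSON in a `claim` query parameter.
function decodePermitFromClaimUrl(url: string): unknown {
  const encoded = new URL(url).searchParams.get("claim");
  if (!encoded) return null; // no permit embedded in this URL
  const json = Buffer.from(encoded, "base64").toString("utf8");
  return JSON.parse(json);
}
```

Scraping then reduces to collecting claim URLs from comment bodies and running each through a decoder like this.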
Validating the non_payment repos, I found one newer-style comment that was missed, so I'll account for that.
I'll try to improve the debugging data as well.
I appreciate you taking the time to check your work. You can mark this as ready to review when you resolve that.
Now pulling all permits: 448 tracked
Payments assigned to users: 434
The only format unaccounted for is:
0xcodercrane **[ [ CLAIM 500 ]** ]()for 0x00868BB3BA2B36316c2fc42E4aFB6D4246b77E46
but it's only seen here, which looks to be an old testing format.
Root outputs: User UBQ Totals, All Payments, All Permits, All Repos w/o Payment, Decoded permits, Unmatched permits
I've caught every case I can find and done my best to attribute the right payment to the correct user with the correct reward origin
Validating against my own figures as before, it appears to be much better.
There are about 25 attributions to "No Assignee", but those can be verified easily enough. I'll be happy to do the rest of the manual reviewing once we know the data-processing logic isn't changing again, whether as part of this spec or a new one.
Still trying to improve it, but it's there or thereabouts at the moment.
I don't think I can take this any further without input on it as-is. Naturally there will be some manual validation required regardless, but if you've any ideas on how to further improve the validity, let me know.
Hey, one last thing: I'm hoping to see a workflow run. Can you add that so I can have it run in the GitHub Action?
You just need to `yarn add tsx` and then `yarn tsx` the script name, and it should work.
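Spelled out, the commands above look like this (the script path is a placeholder, not the repo's actual entry point):

```shell
yarn add --dev tsx      # install tsx as a dev dependency
yarn tsx src/index.ts   # run the TypeScript file directly, no compile step
```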
Is tsx a requirement if it's running fine with tsc? I can run the action I used before and output the files to the action logs, but that doesn't use tsx.
The trouble is that tsx accepts some of the tsconfig props but not all, takes no custom config itself, and throws silly type errors for certain packages, specifically viem's utility types (info from here).
If it's a requirement I'll see about a workaround
This doesn't sound right, because I've been using tsx for probably about a year now and never saw a situation where compiling is superior or necessary. I never had issues running packages etc. Although, to be fair, I never used it with viem.
Perhaps you can take a little while to try and make it work, as I believe it was listed in the requirements. In the meantime, I need to get back to my computer to review your work and make sure that it checks out. I've been away from my computer these last few days, so it's been hard to thoroughly review and approve things.
Unfortunately I'm the only one working on the review team of this particular project! Thanks for your patience.
I'll see what I can do with it then, pav, and I'll make the final push in an hour or so; it should be a bit cleaner for you to review too.
Good to go now. The GHA will be deleted, I assume, as it's only for QA?
Also, the commitlint seems to be failing on the ts-template. I tried to solve it using the ubq-dollar repo's config file and the example config they provide through the error, swapping out the package.json mention of it, but it just kept failing, telling me the rules were empty and to set some.
I only noticed it after trying to commit while inside my original airdrop-cli repo after having brought in the template, whereas before I had been working from this repo inside /airdrop-cli, so none of the hooks were hitting.
The problem I had with tsx was that I was calling it against a collection of functions without invoking anything; calling it with just the IIFE is fine. Lesson learned :))
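A minimal sketch of the lesson above (function names are hypothetical): tsx just executes a module top to bottom, so a file that only *defines* functions does nothing. The entry point has to actually be invoked, e.g. via an async IIFE.

```typescript
// Stand-in for the real permit-collection logic.
async function collectPermits(): Promise<string[]> {
  return ["permit-1", "permit-2"];
}

// Without this IIFE, `yarn tsx` on the file runs the definitions
// above and exits with no output at all.
(async () => {
  const permits = await collectPermits();
  console.log(`tracked ${permits.length} permits`);
})();
```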
I'll have to see the commitlint error you're talking about, because I haven't experienced that.
I think we can keep the GitHub Action.
I started spot-checking the CSVs and so far so good. I was a bit confused about the 25 permits that failed to parse in the CI log. There also seems to be a mismatch between the total number of payment permits issued and how many are on chain (it was something like 350 on chain and 450 in the logs, if I remember correctly). I had to suddenly get off my computer and am posting this comment from a taxi.
I plan to look closer tonight.
Appreciate it, Pav. I'm hoping to get this merged ASAP; bit tight for cash atm lmao
I'll give things another once over later and see if I can improve the output any further. It's been a balancing act trying to capture everything and create useful datasets without swarming them with noise while collecting actionable debug data.
Once the CLI is complete and ready for use before the airdrop, is your vision for it to remove the need for manual validation altogether, or will results still be validated manually regardless?
I can give the full review post-merge, given the delay and initial approval.
lol i need to change the settings
Appreciate it Pav thank you
Why is it DEBUG and not parsing the correct amount? I assume DEBUG means multiple permits were found on the page. It looks like there are mistakes:
Issue # | Amount | Repository | Currency | Payee | Type | URL
---|---|---|---|---|---|---
14 | 200 | devpool-directory-bounties | WXDAI | FernandVEYRIER | assignee | https://github.com/Ubiquity/devpool-directory-bounties/issues/14
12 | 0 | work.ubq.fi | DEBUG | FernandVEYRIER | assignee | https://github.com/Ubiquity/work.ubq.fi/issues/12
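If DEBUG really is a sentinel for ambiguous pages, the tagging logic behind rows like the ones above might look something like this sketch (all names here are hypothetical, not the script's actual code):

```typescript
interface PaymentRow {
  issue: number;
  amount: number;
  currency: string;
}

// When exactly one permit is found, the amount can be attributed directly;
// otherwise the row is flagged with a DEBUG currency and a zero amount
// so it surfaces for manual review instead of silently mis-crediting.
function toRow(issue: number, permits: { amount: number; token: string }[]): PaymentRow {
  if (permits.length === 1) {
    return { issue, amount: permits[0].amount, currency: permits[0].token };
  }
  return { issue, amount: 0, currency: "DEBUG" };
}
```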
I realize now that I asked for addresses to be the identifiers instead of the username. This is relevant because I jumbled up the addresses in the database at some point. Why don't you change the script to use addresses? As I'm pretty sure I mentioned in the specification, for the airdrop I only need addresses (a GitHub username doesn't help me send an airdrop).
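Keying the totals by payee address instead of username could be sketched like this (the map name and normalization are assumptions, not the script's actual code):

```typescript
// Totals keyed by lowercased payee address, since addresses are what
// the airdrop actually needs.
const totalsByAddress = new Map<string, number>();

function credit(address: string, amount: number): void {
  const key = address.toLowerCase(); // normalize mixed checksum casing
  totalsByAddress.set(key, (totalsByAddress.get(key) ?? 0) + amount);
}
```

The airdrop step then just iterates the map entries, so a jumbled username-to-address mapping in the database no longer matters.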
Issue # | Amount | Repository | Currency | Payee | Type | URL
---|---|---|---|---|---|---
5 | 0 | ubiquibot-kernel | DEBUG | whilefoo | conversation | https://github.com/Ubiquity/ubiquibot-kernel/issues/5
@Keyrxng I just finished a high-level spot check. The only thing that really needs to change, I think, is to use addresses instead.
Resolves #95
Comment based parsing as opposed to permit based.
Accounts for the multiple formats going back as far as the start of 2023; I never went back any further, thinking it would only be covering the last year.
See readme for usage instructions.
Originally I started with merged pull requests, then grabbed the closing issue reference and worked backwards that way, but after trying it from an all-issues approach, the results felt better.
So it's no longer just tallying contributions from the last year as per the spec, but all-time payments?
QA