Closed by brianleect 5 months ago
WOW, this is an epic contribution @brianleect 🙏
Is this ready for PR review? I know we've been chatting over in Discord about the importance of separating labels into separate files. If all labels are in one file, you cannot split it properly, and therefore end up with massive bundle sizes.
Thanks again! Excited to join forces here 🎉
Noticed a bug. Some labels apparently are empty. Not sure if it's caused by scraping too quickly?
Wrote a quick script to check. Apparently 186 labels are impacted. I'll try to see if re-running the scraper fixes the problem, or try introducing a delay.
Fixed the empty labels. There also seems to be a weird issue where the label scraping is inconsistent: one run scraping all labels returned ~370 labels, while a second run managed to scrape up to 400 labels total.
Might need to test whether we are getting a consistent number of labels back from labelcloud; if so, we might have an issue elsewhere.
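One way to test that: run the label-list scrape several times and compare the counts. A minimal sketch (the `scrapeLabels` function is hypothetical, standing in for whatever fetches the label list from labelcloud):

```javascript
// Run `scrapeLabels` (a function returning an array of label names)
// several times and report whether every run returned the same count.
function checkLabelCountConsistency(scrapeLabels, runs = 3) {
  const counts = Array.from({ length: runs }, () => scrapeLabels().length);
  return { counts, consistent: counts.every((c) => c === counts[0]) };
}
```

If the counts disagree across runs, labelcloud itself is returning different results and the scraper may need retries or a delay; if they agree, the missing labels are being lost further down the pipeline.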
Thanks for the comments on all this @brianleect 🙏
I'll take a look soon. I appreciate the patience, I was offline a lot for EthMexico where I competed 🙌
We've got a big refactor underway already which replaces the need for SeleniumJS. Thank you for this issue @brianleect, we've decided on a different path that's working well for now! 🙏
Flow

- `node scrape-all` for all labels, or `node scrape-all labelName` for single label retrieval
- Retrieve the list of labels from the etherscan `labelcloud`
- Filter out labels which already have a `label.json` in `src/mainnet/all-json`, along with `ignore_list` labels which are hardcoded in for being `too large` (100k+ labels) or `bugged` (no values)
- Scrape the remaining `filteredLabels` and save each label to `src/mainnet/all-json` as `${label}.json`