Harems-io / Rarity-tools-clone

We can build our own.

Finish child pages and API connection. #5

Open 01101000011011110110010100001010 opened 3 years ago

01101000011011110110010100001010 commented 3 years ago

Continued work.

developerfred commented 3 years ago

@01101000011011110110010100001010 Where can I find the specifications to continue this work?

crazydazy8 commented 3 years ago

I'm not sure what to do. Can you please be more specific?

gitcoinbot commented 3 years ago

@nresh Hello from Gitcoin Core - are you still working on this issue? Please submit a WIP PR or comment back within the next 3 days or you will be removed from this ticket and it will be returned to an ‘Open’ status. Please let us know if you have questions!

nresh commented 3 years ago

Just an update: I should have a PR up in the next couple of days, at least with the caching/filtering done. Sorry, I've been swamped with a couple of other pressing things that have devoured my time over the past few days.

nresh commented 3 years ago

@01101000011011110110010100001010 I have a PR that implements loading from the cache using the Upstash (upstash.com) Redis service: #7

Basically, the first time the page loads it runs through roughly 50,000 collections (taking 2-3 minutes) via a series of delayed API calls (to avoid throttling), filters them by seven-day volume and name, and then adds the remaining collections to the cache.
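That first-load sweep looks roughly like the sketch below. This is illustrative only: the names and thresholds (fetchCollectionsPage, PAGE_SIZE, DELAY_MS, SEVEN_DAY_VOLUME_MIN) are my placeholders, not the exact code in the PR.

```ts
// Sketch of the first-load sweep: paginated, throttled OpenSea calls, then filtering.
// All names and thresholds here are illustrative assumptions.
interface Collection {
  slug: string;
  name: string;
  sevenDayVolume: number;
}

const PAGE_SIZE = 300;          // collections requested per API call (assumed)
const DELAY_MS = 500;           // pause between calls to avoid throttling (assumed)
const SEVEN_DAY_VOLUME_MIN = 1; // minimum seven-day volume to keep (assumed)

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Stand-in for the real OpenSea request; returns one page of collections.
declare function fetchCollectionsPage(offset: number, limit: number): Promise<Collection[]>;

async function buildFilteredCollections(total = 50_000): Promise<Collection[]> {
  const kept: Collection[] = [];
  for (let offset = 0; offset < total; offset += PAGE_SIZE) {
    const page = await fetchCollectionsPage(offset, PAGE_SIZE);
    for (const c of page) {
      // Filter on seven-day volume and a non-empty name.
      if (c.sevenDayVolume >= SEVEN_DAY_VOLUME_MIN && c.name) kept.push(c);
    }
    await sleep(DELAY_MS); // space out requests so we don't get throttled
  }
  return kept;
}
```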

The next time the page loads, it reads directly from the cache, very quickly.

Since I've already populated the cache, the page will now always load from the cache until the cache is emptied.
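The cache-first read is essentially the pattern below (a sketch assuming an ioredis-style client connected via REDIS_URL; the key name is made up, and buildFilteredCollections is the helper from the sketch above):

```ts
// Cache-first load: serve from Redis when possible, rebuild only when the cache is empty.
// Assumes an ioredis-style client; key name and helper names are illustrative.
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL as string);
const CACHE_KEY = "collections:filtered"; // illustrative key name

async function loadCollections(): Promise<Collection[]> {
  const cached = await redis.get(CACHE_KEY);
  if (cached) return JSON.parse(cached); // fast path: cache hit

  // Slow path: run the full sweep from the sketch above, then populate the cache.
  const collections = await buildFilteredCollections();
  await redis.set(CACHE_KEY, JSON.stringify(collections));
  return collections;
}
```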

Right now there are about 115 collections in the cache (out of the 50,000 that were pulled) that meet the filtering criteria. If that number grows past about 500, I will need to store multiple cache objects in Redis, each with at most 500 collections, since we can only request about 500 collections at a time to stay under Upstash's request size limit for their Redis service.
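If it comes to that, the chunked storage would look roughly like this (chunk size and key naming are my placeholders, reusing the redis client from the sketch above):

```ts
// Split the filtered list across multiple keys so each Redis request stays
// under Upstash's request size limit. Chunk size and key names are assumptions.
const CHUNK_SIZE = 500;

async function writeChunkedCache(collections: Collection[]): Promise<void> {
  const chunkCount = Math.ceil(collections.length / CHUNK_SIZE);
  for (let i = 0; i < chunkCount; i++) {
    const chunk = collections.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    await redis.set(`collections:chunk:${i}`, JSON.stringify(chunk));
  }
  await redis.set("collections:chunkCount", String(chunkCount));
}

async function readChunkedCache(): Promise<Collection[]> {
  const count = Number(await redis.get("collections:chunkCount")) || 0;
  const all: Collection[] = [];
  for (let i = 0; i < count; i++) {
    const chunk = await redis.get(`collections:chunk:${i}`);
    if (chunk) all.push(...JSON.parse(chunk));
  }
  return all;
}
```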

Note that the caching won't work until you add the REDIS_URL environment variable to your Netlify environment.

Also, to make this fully work, I need to create a script that refreshes the cache daily by going through all of the collections on OpenSea and systematically filtering them. The logic for that is already in the current PR, but I need to export it into a serverless function that can be scheduled to run every day (rough sketch below).
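Something along these lines, assuming Netlify's scheduled functions via the @netlify/functions schedule helper; the cron expression and the helper names (from the sketches above) are placeholders:

```ts
// Daily cache refresh as a Netlify scheduled function.
// Assumes the @netlify/functions schedule helper; cron string and helpers are illustrative.
import { schedule } from "@netlify/functions";

export const handler = schedule("0 0 * * *", async () => {
  const collections = await buildFilteredCollections(); // full OpenSea sweep + filtering
  await writeChunkedCache(collections);                 // overwrite the previous day's cache
  return { statusCode: 200 };
});
```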

In any case, even before that daily-refresh serverless function is working, I can start on the child pages.

gitcoinbot commented 3 years ago

Issue Status: 1. Open 2. Started 3. Submitted 4. Done


Work for 0.0682 ETH (224.41 USD @ $3205.81/ETH) has been submitted by: