Harems-io / Rarity-tools-clone

We can build our own.

Connect API and Start Child pages #3

Open 01101000011011110110010100001010 opened 3 years ago

01101000011011110110010100001010 commented 3 years ago

I can work on either (A) the child pages with info about particular collections (if we want to implement those now) or (B) getting the OpenSea integration to work so we have real data.

gitcoinbot commented 3 years ago

@nresh Hello from Gitcoin Core - are you still working on this issue? Please submit a WIP PR or comment back within the next 3 days or you will be removed from this ticket and it will be returned to an ‘Open’ status. Please let us know if you have questions!


nresh commented 3 years ago

@01101000011011110110010100001010 just an update here to satisfy gitcoin - I should have the API connection piece ready in a new PR tonight

nresh commented 3 years ago

@01101000011011110110010100001010 I have a PR #4 that implements data pulling from OpenSea. it currently only pulls 1,500 collection records (in about 3 seconds) because of the API request limit.

you can see the new version of the site here: https://harems-io-rarity-tools-35p5xne40-nresh.vercel.app/ (note that it takes a while to load, because I'm literally sleeping for 3 seconds to get the data from the API)
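The throttled pull described above can be sketched roughly like this. `fetchPage` is a hypothetical wrapper around the OpenSea collections endpoint; the page size, page count, and delay are illustrative, not the actual values from PR #4:

```javascript
// Sleep helper used to back off between requests.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Pull collections page by page, pausing between requests to stay
// under the API rate limit. fetchPage(offset, limit) is assumed to
// return an array of collection records (empty when exhausted).
async function fetchAllCollections(
  fetchPage,
  { pageSize = 300, maxPages = 5, delayMs = 1000 } = {}
) {
  const all = [];
  for (let page = 0; page < maxPages; page++) {
    if (page > 0) await sleep(delayMs); // throttle between pages
    const batch = await fetchPage(page * pageSize, pageSize);
    if (batch.length === 0) break; // no more data
    all.push(...batch);
  }
  return all;
}
```

With 5 pages of 300 and a 1-second delay between them, this yields the ~1,500 records / ~3-second load described above.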

I will be working on implementing a caching system (likely using Redis) for the API responses, which would let us pull all the records once and intelligently refresh the cache.

I'm specifically looking at using https://upstash.com/. it has a free tier of 10,000 commands/day, and it should work fine with Next.js + Netlify.

feel free to merge the current PR #4, or we can wait until I get the caching working.

01101000011011110110010100001010 commented 3 years ago

@nresh I can merge now. I need some styling done on the mint js page; could you help with that next? I have the functional part working, it just needs to look pretty.

nresh commented 3 years ago

@01101000011011110110010100001010 for sure, I can help with that as well. do you have designs by any chance? the more concrete the visuals, the easier/quicker it will be to adjust the styling.

btw, an update on caching the OpenSea responses: I got Upstash + Redis working in terms of creating the cache and then pulling data from it if it exists.
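The "pull from the cache if it exists, otherwise fetch and store" flow is the classic cache-aside pattern. A minimal sketch, assuming a client with `get`/`set` in the shape of `@upstash/redis` (the key name and TTL here are made up for illustration):

```javascript
// Cache-aside lookup: return the cached payload if present, otherwise
// fetch fresh data, store it with an expiry, and return it.
// `redis` is assumed to expose get/set like the @upstash/redis client;
// `fetchFresh` would be the slow OpenSea pull.
async function getCollections(
  redis,
  fetchFresh,
  { key = "opensea:collections", ttlSeconds = 3600 } = {}
) {
  const cached = await redis.get(key);
  if (cached) return cached; // cache hit: skip the slow API pull

  const fresh = await fetchFresh(); // cache miss: hit the API
  await redis.set(key, fresh, { ex: ttlSeconds }); // store with expiry
  return fresh;
}
```

The TTL gives a crude version of the "intelligently refresh" idea: the cache simply expires and the next request repopulates it.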

however, I ran into another snag with Upstash's API limits: they only allow requests for no more than 1MB of data at a time. that equated to data for about 500-600 collections (there are at least 50,000 that can be queried).

I realized, though, that:

(1) if I filter to only those collections that have a non-zero total volume, that dramatically decreases how much data needs to be stored, since there are tons of collections that have no volume and were either tests or duds and are generally useless (OpenSea really needs to enrich this API with better endpoints and a better way of filtering for useful stuff for external apps 🤦‍♂️)

(2) I can also strip out all the fields we don't currently use before caching (there are many, including, btw, a "description" field that we were looking for before and can be used in the New Collections area at the top!)

after doing both of those, I think I can get all the data we want in a single cache response that's under 1MB.
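Both trimming steps could look roughly like this. The field names are assumptions based on the general shape of OpenSea collection records, not an exact schema:

```javascript
// Slim the payload before caching: drop zero-volume collections and
// keep only the fields the UI actually uses.
function slimCollections(collections) {
  return collections
    // skip test/dud collections with no trading volume
    .filter((c) => (c.stats?.total_volume ?? 0) > 0)
    // keep only the fields we currently render (field names assumed)
    .map((c) => ({
      slug: c.slug,
      name: c.name,
      description: c.description, // usable in the New Collections area
      image_url: c.image_url,
      total_volume: c.stats.total_volume,
    }));
}
```

Run over the raw API response just before `redis.set`, this is what keeps the single cached blob under the 1MB request limit.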

now if page load times become an issue from pulling all this data in advance, we can:

(1) just cache the data for all the lists that are currently shown on page load (i.e. the New Collections area, the top-10 lists, and the various sorted lists shown in the All Collections table at the bottom).

(2) for searching for particular collections by name (both in the header and at the top of the table), we can cache the data for each collection by its URL slug, and also keep a list of all the slugs for all the collections. when the user searches for something, we compare the search query against the slug list, get the data for the relevant slug(s) from the Redis cache, and populate the table with them.
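The slug-based search in step (2) could be sketched as below. The `collection:<slug>` key scheme and the matching rule (substring match on slugified query) are illustrative assumptions:

```javascript
// Match a user query against the cached list of all collection slugs.
// Slugify the query (lowercase, spaces -> hyphens) and do a substring
// match; cap the number of hits returned.
function matchSlugs(slugList, query, limit = 10) {
  const q = query.toLowerCase().trim().replace(/\s+/g, "-");
  return slugList.filter((slug) => slug.includes(q)).slice(0, limit);
}

// Resolve each matching slug to its cached per-collection record.
// `redis.get` is assumed to behave like the @upstash/redis client;
// the "collection:<slug>" key naming is hypothetical.
async function searchCollections(redis, slugList, query) {
  const hits = matchSlugs(slugList, query);
  return Promise.all(hits.map((slug) => redis.get(`collection:${slug}`)));
}
```

Keeping the slug list as one small cached value means a search costs one list lookup plus one `get` per hit, which stays well within the free tier's command budget.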

so I think we have a path forward - lmk if you have any thoughts, suggestions.

but I'm ok with working on another project if it's more urgent; I just wanted to lay out my current thinking on this stuff.

01101000011011110110010100001010 commented 3 years ago

@nresh I'm trying to pay this bounty for you and start the next task.

nresh commented 3 years ago

ah ok, I'll submit and stop work on the current one, thanks. we can revisit the caching/child pages later