Closed: DefenderOfBasic closed this issue 4 weeks ago
I like the idea of supporting this even simpler version of serving the archive - just wanna make clear that we keep stripped archives in supabase s3 storage.
@TheExGenesis i think for now if we just make the stripped archives we already have publicly accessible, I can try this thing of processing them & mirroring them on cloudflare / doing some minimal post-processing, and that can serve as the snapshot of the "raw" data. (even without the cloudflare part, just having the supabase s3 storage archives public satisfies the minimal requirement of giving the public access to the archive dataset)
Looks like you just need to (1) make the bucket public (2) get its id/url and document it in api-doc.md
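e.g. once that's public & documented, reading an archive could be as simple as this (just a sketch; the project ref, bucket name, and filename here are placeholders, not the real values):

```ts
// Hypothetical example: fetch one user's stripped archive straight from the
// public Supabase storage endpoint with a plain HTTP GET (no auth, no SDK).
// Replace <project-ref> with the real project URL once api-doc.md documents it.
const SUPABASE_URL = "https://<project-ref>.supabase.co";
const BUCKET = "archives";

async function fetchArchive(userId: string): Promise<unknown> {
  const url = `${SUPABASE_URL}/storage/v1/object/public/${BUCKET}/${userId}/tweets.json`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`failed to fetch archive: ${res.status}`);
  return res.json(); // all of the user's tweets in one big JSON
}
```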
this is the other reason I think it would be nice if the archive uploading & exporting pieces were even more minimal, to support use cases like this: https://github.com/TheExGenesis/community-archive/issues/73. The app could have a config mode where the DB is optional (but that may be unnecessarily complex)
I've just learned that the data is already publicly available in object storage! For reference, here's an example, all of someone's tweets in a big JSON:
I like this idea, but I have an issue with having my entire posting history accessible publicly in clear text. Like, I wouldn't want OpenAI/Anthropic to scrape my archive.
Can we have some form of access-control over the files? If we can generate a key for each user that created an account on community-archive and use that to access the files I think that would be a nice solution.
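Something like this maybe (just a sketch, assuming supabase-js, a private bucket, and per-user auth; the bucket name & expiry are made up):

```ts
import { createClient } from "@supabase/supabase-js";

// Sketch: keep the bucket private and hand logged-in users short-lived
// signed URLs instead of public links. Names here are assumptions.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

async function getArchiveUrl(userId: string): Promise<string> {
  const { data, error } = await supabase.storage
    .from("archives")
    .createSignedUrl(`${userId}/tweets.json`, 60 * 60); // link valid for 1 hour
  if (error) throw error;
  return data.signedUrl;
}
```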
@ri72miieop yeah making the data available only to the people you want vs fully public/no auth is something I've been thinking a lot about. Got some initial notes/discussion here: https://github.com/TheExGenesis/community-archive/issues/10. I've been thinking of it as: (1) they've already scraped our public data, they have these huge datasets internally, but we the public do not. so we're leveling the playing field for each other (2) the growth model could be clusters of self hosted archives so we can try different policies. One could be fully open, one could be accessible only to its members, invite only (and you'd be putting your trust into those organizers not to share it in the future). Maybe some of these clusters can merge as they build trust.
(3) alternatively, you could not share your data with anyone but still use all the analytics and tools, if the apps are remote storage compatible (https://remotestorage.io/), basically keep the data offline/on your own server and the apps pull from it.
(4) maybe you share derivatives of your data, not the full thing, like you keep the raw tweets but share the embeddings, or share your top tweets (see "filter tweets before uploading" https://github.com/TheExGenesis/community-archive/issues/14 )
@TheExGenesis this first one should be easy right? The "change the upload directory" one:
/storage/v1/object/public/archives/<user_id>/tweets.json
asking because I wanna try doing a "client-side search" so I can have real-time regex search on all my tweets, and I wanna do it in a way that it uses data directly from the archive (but I don't know the object storage path for my own / an arbitrary user's archive right now). This might be an easy initial milestone?
actually, the code looks like it should already be this way? except `archiveId` contains a timestamp?
i think we can get very far with just this. It's not hard to get account IDs, and if I know an account ID I can get this user's tweets, and I can write a super simple tutorial for even people that have very little coding experience to explore & visualize & build with this data (@Kubbaj this would be a great project for you, can be done in plain HTML/JS while you're learning)
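For reference, a rough sketch of that client-side regex search (the storage URL and the `tweet.full_text` field are assumptions based on the original archive export format; the stripped files might be shaped differently):

```ts
// Sketch: fetch one user's archive once, then run regex search over it
// entirely client-side (e.g. wired to an <input>'s "input" event).
const ARCHIVE_URL =
  "https://<project-ref>.supabase.co/storage/v1/object/public/archives/<user_id>/tweets.json";

let tweets: { tweet: { full_text: string } }[] = [];

async function loadTweets(): Promise<void> {
  tweets = await (await fetch(ARCHIVE_URL)).json();
}

function searchTweets(pattern: string) {
  const re = new RegExp(pattern, "i");
  return tweets.filter((t) => re.test(t.tweet.full_text));
}

loadTweets().then(() => console.log(searchTweets("community archive").length));
```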
@DefenderOfBasic should we close this issue and make a new one called something like "make storage schema more ergonomic"?
@TheExGenesis sure, yeah instead of a generic "make it more ergonomic" we should open specific issues for specific small tasks as needed.
I'm adding in one last note here, this suggestion by @brentbaum: https://quickwit.io/
It looks like it's a way to "have our cake & eat it too". If we didn't want to pay for & scale a DB to, like, a billion tweets, this looks like a way to enable archive-wide search while keeping the data stored only in object storage (I assume that's what Brent meant about "help with scaling", like cost-wise).
Goals
We want to be able to host tweet data & make it available to users as cheaply as possible. The cheapest way to do this is for people to access the raw data directly off of the S3 bucket storage.
To Do
- ❓ right now it's `FriedKielbasa_2024-08-27T03:47:21.000Z.json` instead of `tweets.json`, but in the original archive it's just `tweets.json`, right? Since it's prefixed with the user id we can keep the original filenames?
- ❓ publish an index of the uploaded archives? is this a good idea? it may be a bottleneck with multiple people uploading at the same time? we could skip this and just return it from a DB query
  - OR we could update the index like once an hour or something so it's a single process.
  - OR, is there a way to list all files in a bucket in supabase? (see the sketch after this list)
  - An index is also very nice because users can regularly fetch the archive / just get subsets of it: the timestamps in the index help you know when data has been updated, so you only fetch what is necessary.
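On the supabase listing question: supabase-js does have a storage list call, so we might not need a hand-rolled index at all. A minimal sketch (bucket name & options are assumptions):

```ts
import { createClient } from "@supabase/supabase-js";

// Sketch: list what's in the bucket via supabase-js instead of maintaining
// our own index file. Bucket name is an assumption.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

async function listArchives() {
  // lists the top-level entries (one folder per user id) in the archives bucket
  const { data, error } = await supabase.storage
    .from("archives")
    .list("", { limit: 1000, sortBy: { column: "name", order: "asc" } });
  if (error) throw error;
  return data; // entries have name, created_at, updated_at, metadata, ...
}
```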
Optional
Worth noting: there is the option of a "requester pays" model, where we don't even pay for hosting; this is if we REALLY needed to do it with no money. (don't think this is necessary now)
Old proposal
Instead of uploading everything directly from the client to a supabase/postgres DB, we could upload it (after stripping DMs & the user's email) to cloudflare/s3.

- this would solve how to make automated DB snapshots available (https://github.com/TheExGenesis/community-archive/issues/59): they'd already be available with a simple HTTP GET request / in your browser directly
- can have a post-upload job that processes the archives & converts them to flat files
  ```
  s3://myuser/tweet_chunk.0
  s3://myuser/tweet_chunk.1
  ...
  s3://myuser/tweet_chunk.n
  ```
- extremely cheap to host, no DB/servers/compute required. Very cache & CDN friendly, even for hundreds of gigabytes. (this is a pretty standard way to make public data available, like satellite imagery)
- having the original data would also be nice in case use cases come up later for updating the DB schema or changing how we store it https://github.com/TheExGenesis/community-archive/issues/70
- could also allow users with deleted/banned accounts to upload this way (just create a directory for them like `s3://user-anon-upload