leblowl opened 7 months ago
I think it will be a bit more production-ready if we start the upload whenever the user arrives at the "add members" screen with the invite link, since that's roughly when the data needs to be freshly uploaded.
One question: how does uploaded invite data interact with previously-uploaded data?
Some options I can see for this stage:
...any other ideas?
We address these choices a bit in the doc. Of these, I think 1 or 2 seems like a better first step, just to get things working.
Uploading data after entering the 'add members' tab seems valid, or something similar. We would have to add some level of restriction on the backend to prevent unnecessarily frequent uploads.
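One minimal way to sketch that backend restriction (all names and the interval here are hypothetical, since the QSS API isn't specified in this thread) is a per-community timestamp check:

```typescript
// Sketch of a per-community upload throttle. In a real server this would
// back an HTTP handler that rejects too-frequent uploads (e.g. with 429).
const MIN_UPLOAD_INTERVAL_MS = 5 * 60 * 1000; // assumed: at most one upload per 5 min
const lastUploadAt = new Map<string, number>();

function mayUpload(communityId: string, now: number = Date.now()): boolean {
  const last = lastUploadAt.get(communityId);
  if (last !== undefined && now - last < MIN_UPLOAD_INTERVAL_MS) {
    return false; // too soon since the last accepted upload
  }
  lastUploadAt.set(communityId, now);
  return true;
}

console.log(mayUpload("community-a", 0));       // true: first upload
console.log(mayUpload("community-a", 60_000));  // false: only 1 minute later
console.log(mayUpload("community-a", 600_000)); // true: 10 minutes later
```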
Afaik the CID in the invitation link is based on the content that's being stored on QSS. So each upload -> different CID -> different invitation link.
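As a rough illustration of that property (this is not the real CID algorithm — actual CIDs use multihash/IPLD encoding — just a stand-in hash to show the content-to-identifier relationship):

```typescript
import { createHash } from "crypto";

// Hypothetical stand-in for CID generation: the key property is that
// identical bytes always map to the identical identifier.
function contentId(data: Buffer): string {
  return createHash("sha256").update(data).digest("hex");
}

const a = contentId(Buffer.from("community metadata v1"));
const b = contentId(Buffer.from("community metadata v1"));
const c = contentId(Buffer.from("community metadata v2"));

// Same content -> same id -> same invitation link; changed content -> new id.
console.log(a === b); // true
console.log(a === c); // false
```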
I can see a problem with overwriting previous data with new data each time the owner generates a new link: with a tiny community it's not a problem, but with a slightly bigger one it would be very annoying.
Maybe we could store older data for some time X (e.g. a day) and then automatically delete it?
Here's what our RFC says:

> To invite users, clients gather essential community metadata (users, channels, keys, peer info), put it on the server with an expiration equal to the invite link expiration, get a link, and share it with the user they want to invite.
So invite links don't necessarily include message data. Instead, encrypted message data is uploaded gradually, and clients can download all recent data from QSS.
I propose that we leave out message data. If we do that, the end result of this work will be that users have all community metadata from the time the invite link was created, can see all channels, and can send messages to other users via the p2p network. That seems significant and easy to test. We can create a separate issue for making it clear to users that messages are loading.
For duplicate uploads, the RFC says the invite link includes "the CID of the invite data". Since the CID is a hash, it will differ if the data has changed and be identical otherwise. We can skip the re-upload if the file already exists on the server.
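A minimal sketch of that dedup-on-upload idea, assuming hypothetical `has`/`put` server operations (the real QSS endpoints aren't named in this thread) and a plain SHA-256 hash standing in for the CID:

```typescript
import { createHash } from "crypto";

type Server = { has(cid: string): boolean; put(cid: string, data: Buffer): void };

// Compute the content id first and only upload if the server lacks it.
function uploadIfMissing(server: Server, data: Buffer): { cid: string; uploaded: boolean } {
  const cid = createHash("sha256").update(data).digest("hex");
  if (server.has(cid)) return { cid, uploaded: false }; // identical data already stored
  server.put(cid, data);
  return { cid, uploaded: true };
}

// In-memory fake server, purely for illustration.
const store = new Map<string, Buffer>();
const fake: Server = { has: (c) => store.has(c), put: (c, d) => void store.set(c, d) };

const first = uploadIfMissing(fake, Buffer.from("invite data"));
const second = uploadIfMissing(fake, Buffer.from("invite data"));
console.log(first.uploaded, second.uploaded); // true false
```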
@leblowl @EmiM how does this sound?
We could also include the most recent messages in each channel, capped at something reasonable, to put a rough bound on upload size.
I think we should upload duplicate data. In future work we'll have a TTL so that data expires when the client wants it to. And if we want to let clients keep data on the server, we can let them bump the TTL on something that has been previously uploaded.
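The TTL-bump idea could look something like this server-side bookkeeping (everything here is hypothetical — the thread only says a TTL is future work):

```typescript
// Store an expiry alongside each object; a client can extend ("bump") it
// instead of re-uploading the same data.
const expiresAt = new Map<string, number>();

function putWithTtl(cid: string, now: number, ttlMs: number): void {
  expiresAt.set(cid, now + ttlMs);
}

function bumpTtl(cid: string, now: number, ttlMs: number): boolean {
  if (!expiresAt.has(cid)) return false; // nothing to bump; client must upload
  expiresAt.set(cid, now + ttlMs);
  return true;
}

const DAY = 24 * 60 * 60 * 1000;
putWithTtl("abc", 0, DAY);
console.log(bumpTtl("abc", DAY - 1, DAY)); // true: expiry pushed out another day
```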
We can add a flag to Quiet (`--enable-qss-upload` or whatever you'd like), and the flag could include the server URL (e.g. `api.tryquiet.org`). Then in connections-manager, we can upload all of the data described in https://github.com/TryQuiet/quiet/issues/2406 to the QSS server. We would need to retrieve the data from the various OrbitDB databases once those have been initialized/loaded.
When to upload is an open question. I suppose it's easy enough to just upload every x minutes using `setTimeout`. But I'm sure there are other ideas. Let's keep it simple to start.
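A simple version of that loop, with the scheduler injected so the re-arm logic can be shown synchronously (in Quiet the `schedule` argument would just be the global `setTimeout`; the upload callback is a hypothetical stand-in):

```typescript
type Schedule = (fn: () => void, ms: number) => void;

// Re-arm the timer only after each upload attempt finishes, so a slow or
// failing upload can't stack overlapping attempts.
function makeUploadLoop(upload: () => void, intervalMs: number, schedule: Schedule): void {
  const tick = () => {
    try {
      upload();
    } catch {
      // swallow errors so one failed upload doesn't stop the loop
    }
    schedule(tick, intervalMs);
  };
  schedule(tick, intervalMs);
}

// Drive it with a fake scheduler, purely for illustration.
let uploads = 0;
const queue: Array<() => void> = [];
makeUploadLoop(() => { uploads++; }, 600_000, (fn) => queue.push(fn));
for (let i = 0; i < 3 && queue.length; i++) queue.shift()!();
console.log(uploads); // 3
```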