Closed: Gzing closed this issue 6 years ago.
It's pretty straightforward to implement and seems like the right way to go; just wanted to check with everyone. Or we could not store the IPFS image data on the indexing server at all, and let the frontend get the images directly from IPFS.
I realize we already have SendGrid and Twilio dependencies, but I'm trying not to add more third-party dependencies than absolutely necessary, so folks can easily run their own bridge servers without needing to sign up for too many third-party services.
Feels like the images should probably be stored on IPFS and the listing JSON should include an array of IPFS hashes instead of the raw image data.
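Roughly something like this (the `pictures` field name is just for illustration, not an actual schema):

```js
// Illustration only - field names are made up, not the real listing schema.
// Before: raw image data embedded directly in the listing JSON
const listingBefore = {
  name: 'Example listing',
  pictures: ['data:image/jpeg;base64,/9j/4AAQSkZJRg...'] // megabytes of base64
}

// After: images live on IPFS, the listing only carries their hashes
const listingAfter = {
  name: 'Example listing',
  pictures: ['QmWP28bNAJbkiKrXHAHzotKCvLyNragErycSYQQR9KiFby']
}
```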
I came across this as well when setting up the bridge server.
Where should the image IPFS hashes be stored?
We'll be storing the image hashes in the listing's IPFS data. We can later build something to send along the images from the query server as a part of the results, so we don't pay a latency price.
So, an alternative to uploading the images to IPFS, then writing their hashes into the listing and uploading the listing, might be adding everything into a folder and then uploading the folder.
I'm unsure if this can be done through a web client, although one benefit of this alternative implementation is that we don't end up with X different objects scattered around IPFS, just one folder representing the listing.
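For reference, a rough sketch of the folder approach with `ipfs-http-client` (not tested, and the exact `addAll` API differs between versions, so treat this purely as an illustration):

```js
// Sketch of the "one folder per listing" idea. The ipfs-http-client API has
// changed across versions - check the docs for the version you actually use.
const { create } = require('ipfs-http-client')
const ipfs = create({ url: 'http://127.0.0.1:5001' }) // local IPFS node

async function addListingFolder (listingJson, imageBuffers) {
  const files = [
    { path: 'listing/listing.json', content: JSON.stringify(listingJson) },
    ...imageBuffers.map((buf, i) => ({ path: `listing/images/${i}.jpg`, content: buf }))
  ]
  let rootCid
  // Directory entries are emitted after their contents, so the last entry
  // should be the enclosing 'listing' directory.
  for await (const entry of ipfs.addAll(files)) {
    rootCid = entry.cid
  }
  return rootCid // a single hash that refers to the whole listing
}
```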
I'm late to this party and just catching up.

Image URIs could be `data:`, `dweb:`, or `ipfs:`. The latter two are ways of referencing IPFS hashes, so an image URI would be something like `dweb:/ipfs/QmWP28bNAJbkiKrXHAHzotKCvLyNragErycSYQQR9KiFby` or `ipfs:/QmWP28bNAJbkiKrXHAHzotKCvLyNragErycSYQQR9KiFby`. Note that this means the listing JSON doesn't change at all, except for the default protocol used in the image fields: it used to be `data:`, now it's `ipfs:` (quick example below).

@Gzing: what sort of performance degradation are we talking about? Would it be acceptable for just the current test listings that we created with the images stored as `data:` URIs?
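Concretely, only the URI scheme in the image fields changes (the `pictures` field name is illustrative):

```js
// Same listing JSON, only the image URI scheme changes.
const before = { pictures: ['data:image/jpeg;base64,/9j/4AAQSkZJRg...'] }
const after  = { pictures: ['ipfs:/QmWP28bNAJbkiKrXHAHzotKCvLyNragErycSYQQR9KiFby'] }
// or equivalently:
//             { pictures: ['dweb:/ipfs/QmWP28bNAJbkiKrXHAHzotKCvLyNragErycSYQQR9KiFby'] }
```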
@ambertch: The "additional roundtrip to fetch the images" is not such a bad thing; it's the way HTML references images, after all. The actual image data is really only needed by the browser, which already works this way.

EDIT: @ambertch: Great idea to upload all listing data as a "folder", if such a thing is possible from a web client.
`dweb` or `ipfs` are OK. `data` URIs are heavy, and the current IPFS blobs weigh ~1.5 MB each due to the images.

If we are loading 10 test listings from the bridge server with that kind of data, it's a single request/response cycle for building up the list, which means downloading ~10-15 MB of data. Might be OK for now, if we are planning to just use these in the list detail endpoints.
I've started working on this - just wanted to confirm a few things and make sure I haven't missed any discussions on this. As I understand it, this is what we want:

- Images stored on IPFS, with `dweb:` or `ipfs:` URIs referencing them instead of `data:` URIs in listings.
- `origin-js`, as described in PR #216, implements filtering to only allow `dweb:`, `ipfs:` and `data:` URIs. This function will also rewrite `dweb:` and `ipfs:` URIs to use the gateway configured for `origin-js` (rough sketch below). This will supersede the referenced PR.
- `origin-js` will filter out anything that isn't one of the URIs mentioned above, but in the future we might allow `http:` and/or other protocols.

Sound good? :smiley_cat:
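For illustration, the filter/rewrite could look roughly like this (a sketch of the behaviour described above, not the actual code from PR #216; the gateway URL is a placeholder):

```js
// Sketch only - illustrates the allow-list plus gateway rewrite described above.
const ALLOWED = ['dweb:', 'ipfs:', 'data:']

function toGatewayUrl (uri, gateway) {
  // 'dweb:/ipfs/Qm...' or 'ipfs:/Qm...' -> '<gateway>/ipfs/Qm...'
  const hash = uri.replace(/^dweb:\/*ipfs\//, '').replace(/^ipfs:\/*/, '')
  return `${gateway}/ipfs/${hash}`
}

function rewriteImageUrls (uris, gateway) {
  return uris
    .filter(uri => ALLOWED.some(scheme => uri.startsWith(scheme)))
    .map(uri => (uri.startsWith('data:') ? uri : toGatewayUrl(uri, gateway)))
}

// rewriteImageUrls(
//   ['ipfs:/QmWP28bNAJbkiKrXHAHzotKCvLyNragErycSYQQR9KiFby', 'javascript:alert(1)'],
//   'https://gateway.example.com'
// )
// => ['https://gateway.example.com/ipfs/QmWP28bNAJbkiKrXHAHzotKCvLyNragErycSYQQR9KiFby']
```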
That sounds exactly right, @tomlinton! I could never get a clear read on whether `dweb:` or `ipfs:` was preferred, but we might as well handle both.

We should make a parallel issue for modifying the listing creation process to store the images in IPFS and then put `dweb:`/`ipfs:` URIs in the listing JSON, roughly as sketched below. (Or did you want to cover that in this issue/PR as well?)
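Something along these lines (sketch only; the `ipfs-http-client` calls and the `pictures` field name are assumptions, and the API differs between versions):

```js
// Sketch of the creation flow: upload each image to IPFS, then the listing JSON.
const { create } = require('ipfs-http-client')
const ipfs = create({ url: 'http://127.0.0.1:5001' })

async function createListing (listingFields, imageBuffers) {
  const pictures = []
  for (const buf of imageBuffers) {
    const { cid } = await ipfs.add(buf)  // upload the raw image bytes
    pictures.push(`ipfs:/${cid}`)        // reference the image by hash
  }
  const listing = { ...listingFields, pictures } // 'pictures' is an illustrative field name
  const result = await ipfs.add(JSON.stringify(listing))
  return result.cid // hash of the listing JSON
}
```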
Thanks @wanderingstan! I'll go with `ipfs:` by default but handle both. I'll cover the changes to creation in the same PR.
While pulling in IPFS data and keying it into the DB was working fine, there is performance degradation on the API side due to the heavy image data in the payloads. Can we offload the images to S3 and save URLs into the DB?