Open samwillis opened 1 month ago
Built bundles: 🚀 Deployed on https://6705845cda92212e1566b5e9--pglite.netlify.app
Could this work naturally with S3 / an S3 compatible file store?
@thruflo yes, for a read path it's trivial. It would be possible to make a write path work too, but it would still be a single connection.
Although this is "read only", it actually maintains an "overlay" of writes on each file. So it's really a snapshot loaded over HTTP, with in-memory writes. It just doesn't write back to the server.
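A minimal sketch of that overlay idea, assuming a hypothetical page-level API (this is illustrative only, not the actual `HttpFs` code):

```ts
// Illustrative sketch of a read-only HTTP snapshot with an in-memory write overlay.
// `fetchPageFromServer` is a hypothetical helper that downloads one page over HTTP.
declare function fetchPageFromServer(path: string, pageIndex: number): Promise<Uint8Array>

class OverlayedFile {
  // pageIndex -> bytes written locally in this session (never sent back to the server)
  private overlay = new Map<number, Uint8Array>()

  constructor(private path: string) {}

  async readPage(pageIndex: number): Promise<Uint8Array> {
    // Locally written pages shadow the server-side snapshot...
    const local = this.overlay.get(pageIndex)
    if (local) return local
    // ...otherwise fall back to the read-only snapshot served over HTTP.
    return fetchPageFromServer(this.path, pageIndex)
  }

  writePage(pageIndex: number, data: Uint8Array): void {
    // Writes only ever touch the in-memory overlay.
    this.overlay.set(pageIndex, data)
  }
}
```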
Can we imagine a read/write FS backed by S3-like storage, in an environment where workers (multi-threading) are not allowed?
Export a database from PGlite with `dumpDataDir`; with this change it now includes an `index.json` as a file listing. Untar it to a web server, use the `HttpFs` vfs, and point it at the dir on the server:
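A sketch of what the setup might look like (the `HttpFs` import path and constructor options other than `fetchGranularity` are assumptions, not confirmed by this PR):

```ts
import { PGlite } from '@electric-sql/pglite'
// Hypothetical import path for the new vfs; the real export location may differ.
import { HttpFs } from '@electric-sql/pglite/httpfs'

// 1. Export the data dir (now including an index.json listing) from an existing
//    database, then untar the resulting archive onto a static web server.
const source = new PGlite('idb://my-db')
const tarball = await source.dumpDataDir()
// ...upload/untar `tarball` to https://example.com/my-db/ out of band...

// 2. Open that data dir as a read-only snapshot served over HTTP.
const pg = new PGlite({
  fs: new HttpFs('https://example.com/my-db/', {
    fetchGranularity: 'page', // or 'file'
  }),
})
console.log(await pg.query('SELECT 1'))
```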
`fetchGranularity` is self-explanatory: when set to `"file"` it will download the whole file when it is first read, while `"page"` downloads individual file pages using an HTTP Range header.

Has support for:

- `XMLHttpRequest` with `xhr.responseType = 'arraybuffer'`; this is only available in a web worker (see the sketch below)

Demo site: https://pglite-httpfs-demo.netlify.app
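For illustration, a single page read over the `XMLHttpRequest` transport mentioned above might look roughly like this; the page size and URL layout are assumptions, and synchronous XHR with `responseType = 'arraybuffer'` is only permitted inside a worker, which is why that transport is worker-only:

```ts
// Runs inside a web worker: synchronously read one page of a file via an HTTP Range request.
// PAGE_SIZE and the URL layout are illustrative assumptions, not the PR's actual code.
const PAGE_SIZE = 8192 // PostgreSQL's default page size

function readPageSync(fileUrl: string, pageIndex: number): Uint8Array {
  const start = pageIndex * PAGE_SIZE
  const xhr = new XMLHttpRequest()
  xhr.open('GET', fileUrl, false) // synchronous request, allowed in workers
  xhr.responseType = 'arraybuffer' // only settable on a sync request inside a worker
  xhr.setRequestHeader('Range', `bytes=${start}-${start + PAGE_SIZE - 1}`)
  xhr.send()
  return new Uint8Array(xhr.response as ArrayBuffer)
}
```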
https://github.com/user-attachments/assets/5e9c8224-1899-4be5-8559-f642892bdebc