Closed sgentile closed 2 years ago
answered my own question - but others might find this valuable:
https://www.npmjs.com/package/fs-capacitor
fs-capacitor creates its temporary files in the directory identified by `os.tmpdir()` and attempts to remove them:

- after `writeStream.destroy()` has been called and all read streams are fully consumed or destroyed
- before the process exits

Please do note that fs-capacitor does NOT release disk space as data is consumed, and is therefore not suitable for use with infinite streams or those larger than the filesystem.
I wonder if graphql-upload could have some option that would make the temporary write with fs-capacitor unnecessary? A stream-only option?
Sorry for the late reply, this issue slipped through the cracks for some reason.
Early versions of graphql-upload didn't use fs-capacitor and were "stream only". We invented the disk buffer approach to allow file uploads to be used as freely in GraphQL variables as any other kind of variable. For example, a GraphQL variable that's an `Upload` scalar can be used multiple times in arguments for several sibling mutations in one GraphQL request. If the upload were a naive single stream, there would be a race condition over which resolver consumes the stream, and the others would error.
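For illustration, a request of the kind described above might look like this (the schema and field names here are hypothetical, not from any real API):

```graphql
mutation ($file: Upload!) {
  # Both sibling mutations receive the same $file variable. With a naive
  # single-use stream, only one resolver could consume the upload; the
  # disk buffer lets each resolver get its own read stream.
  saveToArchive(file: $file) {
    id
  }
  scanForViruses(file: $file) {
    clean
  }
}
```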
graphql-upload-minimal is a fork of graphql-upload that removes the disk buffering, accepting the tradeoff that the `Upload` scalar can only be used in specific ways, which might not bother you.
For security we have set up a read-only file system. When that is on, we get the following error: "A ReadStream cannot be created from a destroyed WriteStream."
This can be reproduced with e.g. `docker run -it --read-only` (or `read_only: true` in docker-compose). It is possible to allow writes to a certain directory, e.g. `docker run -it --read-only --tmpfs /tmp alpine sh`.
My question is: what directory does graphql-upload use to back the files, so that we can set up the appropriate tmpfs? We do not write locally in our own code; we create a readStream and then upload to S3 buckets directly.