I've left this open until I had the time and mental energy to compose an adequate response. I had planned to go above and beyond by digging up all the technical reasons we abandoned similar ideas to replace files with unique IDs in the early research and development days, but as reasoned below I don't think such effort is required to close this issue. If you're curious about the history, there is a fair bit to read in old PRs and issues in this repo and the graphql-upload repo.
The first thing that comes to mind (although it's a pretty exotic case) is that the current spec allows files to be used anywhere in the GraphQL operations objects, not just in variables. Would that be supported by your proposal?
Since your proposal doesn't include a map of where files are used in the operations, it's not clear to me how one file variable used as an argument in multiple mutations in the same request could be implemented on the server. Is that something you have considered?
Performance-wise, the map allows the server to cheaply see up front how many files there are and where they are expected to be used in the operations, without having to parse the GraphQL query looking for certain upload scalars in an AST, etc. For example, the map makes implementing the maxFiles setting in graphql-upload trivial.
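For illustration, a minimal sketch of that up-front check, assuming the raw string value of the map form field has already been read (this is not graphql-upload's actual code):

```js
// Hypothetical sketch: `mapField` is the raw value of the `map` multipart
// field, e.g. '{ "1": ["variables.files.0"], "2": ["variables.files.1"] }'.
function assertMaxFiles(mapField, maxFiles) {
  const map = JSON.parse(mapField);
  const fileCount = Object.keys(map).length;
  if (fileCount > maxFiles) {
    // Reject before reading or buffering a single file part.
    throw new Error(`${fileCount} file uploads exceed the limit of ${maxFiles}.`);
  }
}
```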
> Simple alternative if you are not tied to the JS graphql ecosystem.
The point of this spec is to create a standard for interoperability between all GraphQL clients and servers, regardless of the languages or ecosystems. A spec that's incompatible or overly burdensome for a JS browser implementation or a Node.js graphql-based server implementation (the most used and relevant GraphQL environments) immediately fails in this goal.
I welcome constructive criticism around this spec that is actionable, but it seems you’re proposing an alternative with conflicting goals. Please exercise caution as multiple different specs for multipart GraphQL requests would be an interoperability disaster for the GraphQL community, beyond the JS GraphQL ecosystem.
At this point, this spec and its numerous implementations have matured, so not aligning with the greater community will have significant productivity costs for your team. You won't be able to use the nice Altair playground GUI to test file upload variables, you won't be able to easily swap out your GraphQL client or server implementation without rewriting the other end, etc.
> I just want to remind you that if you are using Apollo or any of the heavy JS GraphQL abstractions, this project's spec is good and correctly works within the limitations of Node and the rest of the Node-based GraphQL libraries. But if you're outside of that world, the simple uuid mapping I have outlined above is a simple alternative.
This spec and the graphql-upload JS server-side implementation are not tied in any way to Apollo, or a "heavy js graphql abstraction": they are purposefully lightweight, generic, and compatible with most GraphQL clients and servers. I've dedicated my life to reducing complexity in the JS ecosystem and obsessively avoid heavy abstractions, creating new lightweight tools where none exist (e.g. graphql-api-koa has only a 128kB install size, compared to apollo-server-koa, which has a 14.4 MB install size).
Here is a breakdown of the graphql-upload@11.0.0 dependencies:

- busboy is responsible for parsing the multipart request stream, and is > 90% of the graphql-upload install/bundle size; hopefully one day someone will publish a more modern, lightweight parser we can use instead. It would not be eliminated if we adopted your spec proposal, since either way a multipart request needs to be parsed.
- http-errors is a shared dependency with Koa and Express, so if you have either installed already this dependency does not increase your node_modules size. It is necessary to produce errors that result in appropriate HTTP response codes when processing the multipart request fails. It would not be eliminated if we adopted your spec proposal.
- fs-capacitor is used to buffer file uploads to the filesystem and coordinate simultaneous reading and writing. This is necessary because a single variable representing a file upload can be used as input to multiple mutations in one request, and the multiple resolvers need to be able to process the file upload stream separately at the same time without interfering with each other (see the sketch below). Your spec proposal would not allow this dependency to be removed.
- isobject is a tiny dependency used for basic runtime type checking, such as checking that the GraphQL operation in JSON is an object and not something unexpected. Your spec proposal would not allow this dependency to be removed.
- object-path is the only dependency your spec proposal would allow us to remove. It only has a 47.9kB install size and a 1.3kB bundle size. But this would not be a net saving, as the alternative UUID approach would need new dependencies, e.g. uuid, which has a 114kB install size and a 3.3kB bundle size. You might be able to shop around for a lighter UUID dependency, but we're splitting hairs. Node.js v15.6 introduced a new crypto.randomUUID API, but it will be years before published software can make use of it. Note that creating an object path in the browser is generally the result of zero-dependency recursion/looping that can be done at the same time as you scan for files (e.g. see the extract-files source), but a UUID solution for the browser is for sure a dependency to add to the client bundle.

Overall, it appears your proposed spec would not really decrease implementation complexity, and in some cases it might actually increase it.
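To illustrate the fs-capacitor point above, a rough sketch of the buffer-once, read-many pattern (assuming fs-capacitor's WriteStream and createReadStream API; not graphql-upload's actual code):

```js
// The incoming multipart file stream is written once to a temporary file...
const { WriteStream } = require("fs-capacitor");

function bufferUpload(incomingFileStream) {
  const capacitor = new WriteStream();
  incomingFileStream.pipe(capacitor);
  return capacitor;
}

// ...and every resolver that received the same Upload variable can create its
// own independent read stream, even while the upload is still arriving.
function readForResolver(capacitor) {
  return capacitor.createReadStream();
}
```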
Probably UUID is overkill; perhaps your proposal could be simplified to IDs that are only unique amongst the sibling multipart request field names.
> instead of doing the funky null replacement which this spec defines, which is semantically questionable
Disagree that it's semantically questionable; please see https://github.com/jaydenseric/graphql-multipart-request-spec/issues/4#issuecomment-351225888.
Further discussion here is welcome; closing because at this point I don't plan to action the suggestions raised in this issue in a major breaking spec change.
I'd like to reopen this discussion on @enjoylife's suggested simplification of the spec. I believe their suggestion can greatly simplify this specification and ease the implementation for typesafe frameworks while still achieving the original goals for JS GraphQL libraries.
Just for clarity, this is a rough draft of how I imagine the new version of the spec could look building off of what @enjoylife has defined:
- Requests are multipart/form-data
- There is exactly 1 non-file form field. This contains the GraphQL request. The GraphQL server has a Scalar backed by a String that references the name of the related file form field.
  - It doesn't need to be named operations anymore, but it might be better to reserve this name for further extension.
  - It doesn't need to be before the file form fields, but it might be best to enforce this for performance.
  - We might not need to specify the name of our scalar, but Upload seems good.
- Every other form field should be a file with a unique name which will be referenced by the Upload scalar.
ex:
curl http://localhost:4000/graphql \
-F gql='{ "query": "mutation { upload(files: [\"file_id\"]) }", "variables": null }' \
-F file_id=@a.txt
I've implemented a prototype of this in my own Scala server and I have a very rough implementation of this for Apollo:
Prototype Apollo Attachments Gist
Regarding the comments in #11 (describing how the map field is necessary for performance), this is an implementation detail of the server and something we can solve for Apollo. If our Apollo plugin finds an Upload(file_id), we grab a promise from our shared Uploads object, which we will resolve once we parse that file or reject after we finish parsing the entire request. This lets us execute our GraphQL request as soon as we find it in our form fields.
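A rough sketch of that mechanism, with assumed names (Uploads, fulfill, finish) rather than the gist's actual code: each referenced form-field name maps to a promise that the multipart parser settles as the parts arrive, so query execution never waits for the whole body.

```js
// Hypothetical shared registry: the Upload scalar asks for a file by its
// form-field name and gets a promise; the multipart parser settles it.
class Uploads {
  constructor() {
    this.pending = new Map(); // field name -> { promise, resolve, reject }
  }

  // Called by the Upload scalar when it sees a reference like "file1".
  get(name) {
    if (!this.pending.has(name)) {
      let resolve, reject;
      const promise = new Promise((res, rej) => {
        resolve = res;
        reject = rej;
      });
      this.pending.set(name, { promise, resolve, reject });
    }
    return this.pending.get(name).promise;
  }

  // Called by the multipart parser when the form field `name` streams in.
  fulfill(name, fileStream) {
    this.get(name); // ensure an entry exists
    this.pending.get(name).resolve(fileStream);
  }

  // Called after the whole request body has been parsed; settling an
  // already-resolved promise is a no-op, so only missing files reject.
  finish() {
    for (const { reject } of this.pending.values()) {
      reject(new Error("File was referenced in the query but never sent."));
    }
  }
}
```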
This is a trace from running my gist:
curl http://localhost:4000/graphql \
-F gql='{ "query": "mutation { upload(files: [\"file1\", \"file2\", \"file1\"]) }", "variables": null }' \
-F file1=@Untitled \
-F file2=@API\ Wizard.mp4
You can see that we've achieved the same async waterfall where our GraphQL request execution starts immediately.
> The first thing that comes to mind (although it's a pretty exotic case) is that the current spec allows files to be used anywhere in the GraphQL operations objects, not just in variables.
Yes, each file is always referenced by its uid, so your server can choose to arrange its JSON however it desires without any issues.
An added benefit of this proposal over the current spec is the ability to define file references outside of variables. Right now you're required to always have a "variables" section to reference via your map form field. It's not possible to send something like:
curl http://localhost:4000/graphql \
-F gql='{ "query": "mutation { upload(files: [\"file_id\", \"file_id\"]) }", "variables": null }' \
-F file_id=@a.txt
> Since your proposal doesn't include a map of where files are used in the operations, it's not clear to me how one file variable used as an argument in multiple mutations in the same request could be implemented on the server. Is that something you have considered?
This doesn't really change between the current spec and this proposal. You're always looking the file up in your context based on its uid, and there's no reason you can't repeatedly query the same file based on its uid.
ex:
curl http://localhost:4000/graphql \
-F gql='{ "query": "mutation { a: upload(files: [\"file_id\"]) b: upload(files: [\"file_id\"]) }", "variables": null }' \
-F file_id=@Untitled.mov
> Performance-wise, the map allows the server to cheaply see up front how many files there are and where they are expected to be used in the operations, without having to parse the GraphQL query looking for certain upload scalars in an AST, etc. For example, the map makes implementing the maxFiles setting in graphql-upload trivial.
Although this is true of the new spec change, we'll always be parsing GraphQL requests in GraphQL servers anyway; it's a matter of leveraging the server libraries to facilitate this. This is something that could maybe be handled by an Apollo validationRule, or definitely by an Apollo plugin. We're writing a spec for GraphQL; we should be using the tools our GraphQL servers provide to us.
Even if we're writing an implementation of this spec for a framework that gives zero options to validate our GraphQL request, the current JS spec implementation has already defined code that catches files over the maxFiles limit as they stream through busboy: https://github.com/jaydenseric/graphql-upload/blob/2ee7685bd990260ee0981378496a8a5b90347fff/public/processRequest.js#L67
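For comparison, a minimal sketch of that streaming enforcement, assuming busboy v1's API (this is not the linked graphql-upload code):

```js
// Minimal sketch: busboy can cap the number of file parts while the
// multipart body is still streaming, before anything is buffered.
const busboy = require("busboy");

function parseMultipart(req, { maxFiles }) {
  const parser = busboy({ headers: req.headers, limits: { files: maxFiles } });

  parser.on("file", (fieldName, fileStream, info) => {
    // Hand fileStream to whatever is waiting on `fieldName`.
  });

  parser.on("filesLimit", () => {
    // Emitted as soon as the limit is exceeded; abort the request here.
  });

  req.pipe(parser);
}
```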
> The point of this spec is to create a standard for interoperability between all GraphQL clients and servers, regardless of the languages or ecosystems.
Exactly: this spec appears to be designed to run as JS server middleware. There is a good amount of indirection, implementation-specific solutions, and dependencies on the implementing language/framework. This all creates more work for server implementers.
I did an audit of the various server implementations, and all of the ones I looked at depend on either:
- the language being dynamic (so variables can be overwritten in place), or
- a top-level Object type and casting (ignoring type safety).
There doesn't seem to be a good way to add this specification to a typesafe language/framework without it devolving into the proposed spec change.
async-graphql in Rust

*variable = Value::String(format!("#__graphql_file__:{}", self.uploads.len() - 1));

We can see that internally, after the map is parsed, we replace the null inside the variables definition with a uid to reference the specific file.
/// Get the upload value.
pub fn value(&self, ctx: &Context<'_>) -> std::io::Result<UploadValue> {
ctx.query_env.uploads[self.0].try_clone()
}
When we get the value out of the Scalar, we pull the actual file stream out of the context via that same uid.
caliban in Scala

// If we are out of values then we are at the end of the path, so we need to replace this current node
// with a string node containing the file name
StringValue(name)

We're setting our variable value to the filename in order to pull it out of the context later.
sangria in Scala

(This isn't actually in the library, but this gist describes how to implement it.)

https://gist.github.com/dashared/474dc77beb67e00ed9da82ec653a6b05#file-graphqlaction-scala-L54
(GraphQLRequest(gql = gqlData, upload = Upload(mfd.file(mappedFile._1)), request = request))
we store our uploaded file separately from our GraphQL request.
https://gist.github.com/dashared/474dc77beb67e00ed9da82ec653a6b05#file-controller-scala-L68
userContext = SangriaContext(upload, maybeUser),
we pass our file through via the context.
https://gist.github.com/dashared/474dc77beb67e00ed9da82ec653a6b05#file-exampleschema-scala-L15
val maybeEggFile = sangriaContext.ctx.maybeUpload.file.map(file => Files.readAllBytes(file.ref))
Inside of our resolver we lookup the file in the context to actually use it.
All of these examples have implemented @enjoylife's proposal under the covers in order to preserve some form of type safety.
We can use these libraries as a guide to show us how to implement supporting both the current version of the spec and the proposed change in the same server with plenty of code sharing.
graphql-upload in JavaScript

This JavaScript implementation depends on the language being dynamic so that we can overwrite our variables with an Upload instance.
operationsPath.set(path, map.get(fieldName));
We assign the Upload instance to the specified location in our GraphQL JSON request.
if (value instanceof Upload) return value.promise;
When parsing our Scalar value, we check and cast to make sure we found an Upload instance.
graphql-java-servlet in Java

This Java implementation depends on the top-level Object type so that we can check and cast our variables on the fly.
objectPaths.forEach(objectPath -> VariableMapper.mapVariable(objectPath, variables, part));
We set each http.Part in our variable map.
if (input instanceof Part) {
return (Part) input;
When parsing our Scalar, we check and cast to make sure we found an http.Part instance.
> This spec and the graphql-upload JS server-side implementation are not tied in any way to Apollo, or a "heavy js graphql abstraction"
I can't speak for @enjoylife, but I don't believe the proposed changes to this spec are implying the code for graphql-upload is heavy. graphql-upload is quite elegant in its implementation; in fact, for my Apollo prototype I borrowed heavily from graphql-upload. The heavy part is that, while "The point of this spec is to create a standard for interoperability between all GraphQL clients and servers, regardless of the languages or ecosystems", the current iteration of this specification constrains non-dynamic languages in order to be implementable as JS server middleware. Evolving this specification will better fit the growing GraphQL ecosystem and make it future-proof so that everybody can benefit from the work you've done here.
I've opened #55 to continue the discussion
TL;DR If you are not using a server-side GraphQL implementation with limitations on how scalars and other types should be "resolved" (such limitations arise in Apollo's JS server, see #11 and here), and you're not tied to graphql-upload, then I would not recommend using this spec verbatim. Instead, just use uuids to map between the schema and each body part.
The environment I'm assuming: on the client side, we collect files from an <input type="file" /> element providing File objects. We ultimately want to use these File objects as variables within our request. Now, instead of doing the funky null replacement which this spec defines, which is semantically questionable, just use the scalar to hold a uuid and use it when constructing requests, with variables along the lines of { variables: { files: [{ uuid: 'xxx' }] } }.
Following such an approach, the curl request carries the query as the first form field and each file as a subsequent form field named by its uuid.
Then server side you simply read off the first part of the body, which holds the query. Pretty much every GraphQL implementation gives you access to the parsed AST, and you can execute against the schema as desired having read that first part. From there you could either block and consume all the remaining parts, setting them aside in memory or temp files for the resolvers to look up by each file variable's uuid, or you can read the parts as needed, streaming in the data, with resolvers waiting in parallel while others consume the body up to the needed part.
For additional clarity, here's an outline for a JS client and Go server setup.

Client side:
1. Set uuids on the files to upload.
2. Map the variables to satisfy the schema, e.g. { variables: { files: [{ uuid: 'xxx' }] } }.
3. Fire off the request, appending the file(s) using the uuid as the multipart field name (see the sketch below).
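A minimal client-side sketch of those steps, with assumed names (the gql field name, the files variable shape) that any implementation is free to change:

```js
// Hypothetical client sketch: reference each File by a uuid in the variables,
// then append the file itself under that uuid as the multipart field name.
function buildUploadRequest(query, files) {
  const body = new FormData();

  // 1. Assign a uuid to each file and reference it in the variables.
  const entries = files.map((file) => ({ file, uuid: crypto.randomUUID() }));
  const variables = { files: entries.map(({ uuid }) => ({ uuid })) };

  // 2. The GraphQL request goes first, in the single non-file form field.
  body.append("gql", JSON.stringify({ query, variables }));

  // 3. Every other form field is a file, named by its uuid.
  for (const { file, uuid } of entries) {
    body.append(uuid, file, file.name);
  }

  return fetch("/graphql", { method: "POST", body });
}
```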
Server side (rough example): read off the query part first, then consume the remaining parts keyed by uuid.
In the resolvers, look up each file by its uuid; a stand-in sketch follows.
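The original outline targets a Go server; as a stand-in, here is a rough Node-style resolver sketch of that lookup, where ctx.uploads is an assumed per-request registry filled in by the multipart parser:

```js
// Hypothetical resolver sketch: the upload scalar carries the uuid string,
// and the resolver pulls the matching part out of the request context.
const resolvers = {
  Mutation: {
    async upload(_parent, { files }, ctx) {
      for (const { uuid } of files) {
        // Resolves once the multipart part named `uuid` has been read.
        const fileStream = await ctx.uploads.get(uuid);
        // ...consume fileStream (pipe it to storage, etc.)...
      }
      return true;
    },
  },
};
```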
If you made it thus far, I just want to remind you that if you are using Apollo or any of the heavy JS GraphQL abstractions, this project's spec is good and correctly works within the limitations of Node and the rest of the Node-based GraphQL libraries. But if you're outside of that world, the simple uuid mapping I have outlined above is a simple alternative.