jaydenseric / graphql-upload

Middleware and an Upload scalar to add support for GraphQL multipart requests (file uploads via queries and mutations) to various Node.js GraphQL servers.
https://npm.im/graphql-upload
MIT License

Unexpected behaviour when maxFileSize is exceeded #298

Closed silviacir closed 2 years ago

silviacir commented 2 years ago

I am using "graphql-upload": "^13.0.0" along with "apollo-server-express": "^3.1.2". I've added the following Express middleware:

import { graphqlUploadExpress } from "graphql-upload";

app.use(graphqlUploadExpress({
    maxFileSize: GQL_UPLOAD_MAX_FILE_SIZE,
    maxFiles: 1
}));

with the following code as part of a mutation's resolver:

import { ReadStream } from "fs-capacitor";
import { FileUpload, GraphQLUpload } from "graphql-upload";
...
new Promise(async (resolve, reject) => {
    try {
        const { createReadStream, filename, mimetype }: FileUpload = await file;
        // Sometimes the size-limit error is thrown synchronously here…
        const stream: ReadStream = createReadStream();
        // …and sometimes it is emitted on the stream instead.
        stream.on("error", reject);
        stream.on("data", () => "");
        stream.on("end", () => {
            resolve({});
        });
    } catch (error) {
        reject(error);
    }
});

I noticed the following behaviour when an uploaded file exceeds maxFileSize:

- If GQL_UPLOAD_MAX_FILE_SIZE < 63000 KB (approximately), the error is thrown by the createReadStream call and is therefore caught in the catch branch.
- If GQL_UPLOAD_MAX_FILE_SIZE > 63000 KB (again, approximately), I need to catch the error in the listener for the error event instead.

In both cases the message is the same: `PayloadTooLargeError: File truncated as it exceeds the 63000 byte size limit.`

Any idea what could be the mistake?

jaydenseric commented 2 years ago

Multipart requests are streamed in chunks of data. If it can be determined from the very first chunk received that the file size limit is exceeded, createReadStream errors early to avoid the overhead of setting up a stream for the sole purpose of emitting an error. This is the expected and tested behavior, but it can always be changed if something else makes better sense.

It's been years now since we came up with the current system, so I'm too rusty to give a detailed technical justification for the way it works without a deep dive. I have faint memories of questioning this exact behavior several times and looking at doing it another way, but once I understood the problem better I realised it was good how it was.

jaydenseric commented 2 years ago

Closing because it's working as originally designed, but if people would like to add more feedback or suggest a different design, feel free to continue the conversation here.