jaydenseric / graphql-upload

Middleware and a scalar Upload to add support for GraphQL multipart requests (file uploads via queries and mutations) to various Node.js GraphQL servers.
https://npm.im/graphql-upload
MIT License

Help: Stream upload to S3 #275

Closed · undefinitely closed this issue 2 years ago

undefinitely commented 2 years ago

I'm wondering if anyone would be kind enough to point me in the right direction; I've been at it for a few hours now.

What I think is pretty vanilla graphql-upload and S3 usage from their respective docs (AWS SDK v3):

// Assuming these imports elsewhere in the file:
// import { PutObjectCommand } from "@aws-sdk/client-s3";
// import { finished } from "node:stream/promises";
const { createReadStream, filename, mimetype, encoding } = await file;

const fileStream = createReadStream();

const data = await s3Client.send(
  new PutObjectCommand({
    Bucket: process.env.UPLOAD_BUCKET,
    Key: `delete-me`,
    Body: fileStream,
  })
);

await finished(fileStream);

return JSON.stringify({ filename, mimetype, encoding });

Raises: "A header you provided implies functionality that is not implemented" (501) Header: Transfer-Encoding

However...

const { createReadStream, filename, mimetype, encoding } = await file;

const fileStream = createReadStream();

const out = fs.createWriteStream('tmpfile'); // fs from "node:fs"

fileStream.pipe(out);
// Wait for the temp file write to finish flushing before reading it back.
await finished(out);

const data = await s3Client.send(
  new PutObjectCommand({
    Bucket: process.env.UPLOAD_BUCKET,
    Key: `delete-me`,
    Body: fs.createReadStream('tmpfile'),
  })
);

return JSON.stringify({ filename, mimetype, encoding });

Works! But obviously writing it out again and reading back in is tragically slow. And if my understanding so far of graphql-upload's internals is correct, it is also hopelessly redundant.

I've tried all manner of naive workarounds using PassThrough etc., inspired by other answers here, but no matter what I do with the stream it always seems to lead back to the same Transfer-Encoding complaint.

If somebody could help me understand the difference between graphql-upload's createReadStream and Node's fs.createReadStream and point me in the right direction, I would much appreciate it.

Cheers

Edit: There is another person with what appears to be the same question on StackOverflow: https://stackoverflow.com/questions/69770931/node-js-aws-s3-sdk-v3-showing-error-while-putting-object

jaydenseric commented 2 years ago

I'm not sure off the top of my head what problem you are running into, but here is a copy-paste of some of my past project code:

/**
 * Stores an image in S3.
 * @param {object} client AWS S3 client instance.
 * @param {object} options Options.
 * @param {string} options.id Image ID.
 * @param {ReadStream} options.stream Image read stream.
 * @param {string} options.mimetype Image MIME type.
 * @param {string} options.filename Image filename.
 */
async function storeS3Image(
  client,
  { id, stream, mimetype, filename }
) {
  try {
    await client
      .upload({
        Key: id,
        Body: stream,
        ContentType: mimetype,
        Metadata: { filename },
        ACL: 'public-read',
      })
      .promise();
  } catch ({ message }) {
    throw new Error(`Failed to store in S3 image ID “${id}”: ${message}`);
  }
}

const { createReadStream, mimetype, filename } = await imageUpload;
const stream = createReadStream();
const id = ''; // Create a unique ID here…

await storeS3Image(s3, { id, stream, mimetype, filename });

It looks a bit different to the code you shared. Is that helpful?

SergioSuarezDev commented 2 years ago

I'm using S3; I hope this code helps you (part of my s3.service):

  constructor(
    @Inject(ENVFile.KEY)
    private configService: ConfigType<typeof ENVFile>
  ) {
    const s3 = new S3({
      region: this.configService.s3.region,
      secretAccessKey: this.configService.s3.secret_key,
      accessKeyId: this.configService.s3.public_key
    });
    this.s3Bucket = this.configService.s3.bucket;
    this.upload = promisify(s3.upload).bind(s3);
    this.deleteObject = promisify(s3.deleteObject).bind(s3);
    this.getObject = promisify(s3.getObject).bind(s3);
  }

  async uploadFile(file: any, name): Promise<string> | never {
    try {
      const { Location } = await this.upload({
        Bucket: this.s3Bucket,
        Key: name,
        Body: file.buffer,
        ACL: 'public-read',
        ContentType: file.mimetype
      });
      return Location;
    } catch (err) {
      throw new InternalServerErrorException({ code: 'S3_ERROR' });
    }
  }

undefinitely commented 2 years ago

No luck unfortunately. It may be something to do with my use of the newer AWS SDK. I'll continue digging into it over the weekend and report back if there's anything that can be done to make working with S3 easier, or if I've simply screwed up somewhere.

Thanks heaps for your input!

jaydenseric commented 2 years ago

Closing because this issue doesn't appear to be actionable on the graphql-upload side, but feel free to keep the conversation going.

trsdln commented 1 year ago

For anyone experiencing the same issue: the root cause of the problem is a missing ContentLength, because the newer AWS SDK (v3) cannot work out the length from graphql-upload's stream alone. Possible solutions:

  1. pass the file size from the frontend separately and then specify ContentLength on the PutObjectCommand
  2. use @aws-sdk/lib-storage if you don't know the file size at the time of the s3Client.send call (see the sketch below)
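
For option 1, the only change to the first snippet above would be an extra ContentLength field on the PutObjectCommand input, carrying the byte size the client reports for the file. For option 2, here is a minimal sketch, reusing s3Client, file, and process.env.UPLOAD_BUCKET from the snippets above; everything else about the resolver stays the same:

import { Upload } from "@aws-sdk/lib-storage";

const { createReadStream, filename, mimetype, encoding } = await file;

// Upload performs a managed multipart upload, so it does not need to know the
// stream's total length up front. (PutObjectCommand without a ContentLength
// falls back to chunked Transfer-Encoding, which is what S3 rejects with the
// 501 above.)
const upload = new Upload({
  client: s3Client,
  params: {
    Bucket: process.env.UPLOAD_BUCKET,
    Key: `delete-me`,
    Body: createReadStream(),
    ContentType: mimetype,
  },
});

await upload.done();

return JSON.stringify({ filename, mimetype, encoding });

If memory use is a concern, the partSize and queueSize options on Upload control how large each buffered part is and how many parts upload in parallel.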