Currently we upload to the API, which resizes items, determines their MIME types and dimensions, saves all this to the DB, and then uploads the files to S3. This has disadvantages:
The user must wait for each item to be processed by the API, which introduces a delay.
Handling uploads on the API server is a bad idea in general: it's resource-intensive and forces looser server config, i.e. the server must accept large request body sizes that it would otherwise be better to reject. (It also means we have to run a graphql server capable of uploads, which narrows the options somewhat/makes switching more complicated — see also https://github.com/keepthatworktoyourself/wombat/issues/125)
It'd be preferable to offload uploads to cloud infra.
We can do this by having the client request a pre-signed upload URL, then use it to upload directly to S3. A dedicated piece of infra (for now, the API) can munch through recent uploads to generate sized versions asynchronously.