ducktors / turborepo-remote-cache

Open source implementation of the Turborepo custom remote cache server.
https://ducktors.github.io/turborepo-remote-cache/

Payload too large #366

Closed by PauloMesquitaSP 3 weeks ago

PauloMesquitaSP commented 1 month ago

šŸ› Bug Report

I deployed this on a VPS with the local storage provider to test it, but I'm getting 413 Payload Too Large even after setting BODY_LIMIT to a really high number. The documentation doesn't specify the unit of measurement, so I don't know whether it's bytes, KB, or MB.

To Reproduce

node v20.13.0, pnpm v9.1.1

Steps to reproduce the behavior:

.env:

NODE_ENV=production
PORT=4444
TURBO_TOKEN=erased
STORAGE_PROVIDER=local
BODY_LIMIT=99999999999999999999999999999999999999999999999999999999999999999999999999

I tested BODY_LIMIT with multiple values before reaching this extreme one. The build folder of the app I'm trying to cache is 10.7 MB.
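For context, this server is built on Fastify, whose bodyLimit option is measured in bytes (Fastify's own default is 1048576, i.e. 1 MiB). Assuming BODY_LIMIT maps directly onto that option, a realistic value for a 10.7 MB artifact would look something like:

# assuming the unit is bytes: 50 * 1024 * 1024 = 52428800 (50 MiB)
BODY_LIMIT=52428800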

Dockerfile command: RUN echo "$PASS" | sudo -E -S pnpm build --filter=api...

Package.json script: "build": "turbo run build --api=\"https://subdomain.erased.dev\" --token=\"erased\"",
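As an aside, Turborepo can read the same settings from environment variables instead of inline flags; TURBO_API, TURBO_TOKEN, and TURBO_TEAM are the documented equivalents (the TURBO_TEAM value here is a placeholder matching the slug in the error below):

TURBO_API=https://subdomain.erased.dev
TURBO_TOKEN=erased
TURBO_TEAM=erased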

Error: WARNING failed to contact remote cache: Error making HTTP request: HTTP status client error (413 Payload Too Large) for url (https://subdomain.erased.dev/v8/artifacts/a45c1a3eaddad21e?slug=erased)

Expected behavior

I have never used this before, so I don't know exactly what response to expect, but I did not expect to keep getting 413 as I increased the body limit.

Your Environment

Machine running Docker: Linux Manjaro
VPS: Amazon Linux

matteovivona commented 3 weeks ago

@PauloMesquitaSP On one of our servers we configured BODY_LIMIT=1048576000 and we have never had any errors. Try setting something similar and more realistic than 999999...
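That value is consistent with the limit being expressed in bytes:

1048576000 bytes = 1000 Ɨ 1048576 bytes = 1000 MiB ā‰ˆ 1 GB

If a realistic value in that range still produces a 413, one thing worth ruling out is a reverse proxy in front of the Node process; nginx, for example, enforces its own client_max_body_size (1 MB by default) and will answer 413 before the request ever reaches the server.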

fox1t commented 3 weeks ago

We increased the default limit to 50 MB: https://github.com/ducktors/turborepo-remote-cache/pull/384
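For anyone wiring up a similar limit by hand, here is a minimal sketch of how a byte-denominated body limit is applied in Fastify, the framework this server is built on. The env-var plumbing is illustrative, not the repository's actual bootstrap code:

import Fastify from 'fastify'

// Fastify's bodyLimit is expressed in bytes; its built-in default of
// 1048576 (1 MiB) is below the 10.7 MB artifact in this report and
// would itself reject the upload with 413 Payload Too Large.
const app = Fastify({
  // 52428800 bytes = 50 MiB, matching the new default mentioned above.
  bodyLimit: Number(process.env.BODY_LIMIT ?? 52428800),
})

app.listen({ port: Number(process.env.PORT ?? 4444) })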