Closed: admmasters closed this issue 1 year ago.
@admmasters We closely monitor cache item response times through an authentication wrapper around S3, so I don't think this is a cache issue. Can you confirm by disabling the remote cache and letting me know if the gap goes away?
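One way to run that comparison, assuming the remote cache is authenticated via the standard `TURBO_TOKEN` / `TURBO_TEAM` environment variables (a sketch, not the only way to disable it):

```shell
# With the remote cache (as in the original report):
TURBO_REMOTE_ONLY=true NODE_ENV=production pnpm build

# Without the remote cache: unset the credentials so turbo cannot reach it
# and falls back to local behavior only. If the ~50s gap disappears here,
# the pause is tied to the remote cache path.
env -u TURBO_TOKEN -u TURBO_TEAM NODE_ENV=production pnpm build
```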
Is the entire git repository present in Docker? If we don't have the git index, we have to manually hash every single file. In a Docker environment, file IO can be miserably slow, and those two things combined would lead to precisely the experience you're describing.
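A quick way to check this inside the container is to look for the `.git` directory in the workspace root. The helper name below is hypothetical; it just makes the check reusable:

```shell
# Verify the git metadata actually made it into the container/build context.
# If .git is absent, turbo cannot use the git index and must hash every
# file itself, which is slow on Docker file IO.
check_git_index() {
  if [ -d "$1/.git" ]; then
    echo "git metadata present"
  else
    echo "git metadata missing"
  fi
}

# Run against the current workspace directory:
check_git_index .
```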
Other things to try:
- `--no-daemon`
... for CI you don't need it, and setting up file watches is unnecessary. We may not be detecting that your environment is CI if it's particularly custom.

Thanks @nathanhammond, after coming back to Turbo post 1.9.7 I can confirm there have been a LOT of improvements. Now a cache hit is significantly quicker (with no extra config) 👏🏽
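The `--no-daemon` suggestion above could look like this in a CI step (a sketch; the `build` task name mirrors the reproduction command in this report):

```shell
# Skip the turbo daemon in CI, where file watching buys nothing.
# --no-daemon tells turbo not to start or connect to the daemon.
TURBO_REMOTE_ONLY=true NODE_ENV=production turbo run build --no-daemon
```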
What version of Turborepo are you using?
1.9.1
What package manager are you using / does the bug impact?
pnpm
What operating system are you using?
Linux
Describe the Bug
When running any operation that interacts with the remote cache, there is a long pause (~50 seconds) before the request is made to the cache. This does not occur locally, only in our CI (Jenkins DinD, Docker-in-Docker).
Turning up the verbosity of the logs, we ended up seeing an unexplained gap.
Is there anything we should know about when configuring Turborepo? Even with verbose logs, it feels like we could use more information to help with debugging.
Both cache hits and cache misses suffer from this pause.
Expected Behavior
Cache hit is faster than 50 seconds.
To Reproduce
Command used:
`TURBO_REMOTE_ONLY=true NODE_ENV=production pnpm build`
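To put a number on the pause in CI logs, a small wall-clock timing wrapper can help. The helper name `time_run` is hypothetical, and the commented usage line below just reuses the command from this report:

```shell
# Hypothetical helper: time any command and print whole-second wall-clock
# duration, so the ~50s gap shows up plainly in CI output.
time_run() {
  start=$(date +%s)
  "$@" >/dev/null 2>&1
  end=$(date +%s)
  echo $((end - start))
}

# Usage (the turbo invocation comes from the report above):
# time_run env TURBO_REMOTE_ONLY=true NODE_ENV=production pnpm build
```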
Reproduction Repo
Unfortunately, a reproduction repo does not exist.