That's quite surprising - the cache has an internal mutex store, so it shouldn't ever try to access the same file for writing twice at the same time within the same process (only when 2+ processes run).
I think we will need a consistent repro to understand what happens, although I understand it'll probably be difficult 😕 An alternative would be for you to dig in and find more information. To do that, you can use:
yarn set version from sources --no-minify
This will get you an unminified bundle which you'll then be able to edit with a bunch of console.log statements. The two functions that are most likely to be relevant are fetchPackageFromCache (which has the internal mutex I mentioned) and lockPromise (which is the actual FS mutex).
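For anyone digging in, here's a minimal conceptual sketch of what an FS-level lock of the kind lockPromise provides roughly looks like, assuming Node's fs/promises API. This is not Yarn's actual implementation; the helper names, polling interval, timeout, and error text are illustrative assumptions only.

```ts
import { open, unlink } from "fs/promises";
import type { FileHandle } from "fs/promises";

// Acquire an exclusive lock file, retrying until timeoutMs has elapsed.
// Conceptual sketch only -- not Yarn's actual code.
async function acquireLock(lockPath: string, timeoutMs: number): Promise<FileHandle> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    try {
      // "wx" fails with EEXIST if the lock file already exists, so the first
      // caller to create it owns the lock.
      return await open(lockPath, "wx");
    } catch (error) {
      if ((error as NodeJS.ErrnoException).code !== "EEXIST") throw error;
      if (Date.now() > deadline)
        throw new Error(`Couldn't acquire a lock in a reasonable time (via ${lockPath})`);
      // Another process (or task) holds the lock; wait a bit and retry.
      await new Promise((resolve) => setTimeout(resolve, 100));
    }
  }
}

// Run `callback` while holding the lock, then release it.
async function withFileLock<T>(
  lockPath: string,
  timeoutMs: number,
  callback: () => Promise<T>,
): Promise<T> {
  const handle = await acquireLock(lockPath, timeoutMs);
  try {
    return await callback();
  } finally {
    await handle.close();
    await unlink(lockPath);
  }
}
```

If something holds the .flock file for longer than the deadline (or never releases it), every other writer fails with exactly the kind of error reported in this issue.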
Hey @arcanis, thanks for getting back to me. I realised when I raised this that it would be a tricky one to track down. One thing I've noticed is that when we set up Yarn 2, our devops team changed the location of the cache to a shared tmp mount that potentially has a lot of contention on it (although not on the cache dir itself). I've moved the cache back to the old yarn 1.22 local mount this morning and have so far not seen any issues. I'll monitor it for a few days and let you know. Of course this probably doesn't help us track down whether there is an issue, but it potentially allows me to continue with our Yarn 2 adoption.
I'm closing this issue as it has gone away after moving the location of the cache. We have an ongoing issue with our internal devops team to move the cache back to the location which was causing the problem (as it has more space); however, that is not likely to be turned around quickly. I'll re-raise this at a later date if I have anything more concrete to go on.
I was hoping to get eyes on this issue again. I reproduced this error yesterday:
➤ YN0001: │ Error: typescript@patch:typescript@npm%3A3.9.6#builtin<compat/typescript>::version=3.9.6&hash=8cac75: Couldn't acquire a lock in a reasonable time (via /root/.yarn/berry/cache/typescript-patch-9a6aa77d6f-2.zip.flock)
at s.lockPromise (/<redacted>/ui-platform/yarn/yarn-berry.js:10:106835)
at async f.writeFileWithLock (/<redacted>/ui-platform/yarn/yarn-berry.js:42:93986)
at async /<redacted>/ui-platform/yarn/yarn-berry.js:42:92867
at async /<redacted>/ui-platform/yarn/yarn-berry.js:42:94021
at async s.lockPromise (/<redacted>/ui-platform/yarn/yarn-berry.js:10:107015)
at async f.writeFileWithLock (/<redacted>/ui-platform/yarn/yarn-berry.js:42:93986)
at async I (/<redacted>/ui-platform/yarn/yarn-berry.js:42:92827)
at async /<redacted>/ui-platform/yarn/yarn-berry.js:42:93548
at async fetchPackageFromCache (/<redacted>/ui-platform/yarn/yarn-berry.js:42:93475)
at async t.PatchFetcher.fetch (/<redacted>/ui-platform/yarn/yarn-berry.js:56:56974)
section_end:1595539305:fetch_step
➤ YN0000: └ Completed in 2.02m
➤ YN0000: Failed with errors in 2.06m
section_end:1595539306:fetch_step
➤ YN0000: └ Completed in 2.07m
cd /<redacted>/ui-platform/projects/lib/prisma && yarn exited with code 1
section_end:1595539307:fetch_step
Repro details: yarn install commands running via pLimit(os.cpus().length); see the sketch below.
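A rough reconstruction of what such a setup might look like. The workspace paths, the child_process wiring, and the helper names are assumptions for illustration, not the reporter's actual scripts; the point is simply that several yarn install runs fire concurrently, capped at one task per CPU core via p-limit.

```ts
import os from "os";
import { execFile } from "child_process";
import { promisify } from "util";
import pLimit from "p-limit";

const run = promisify(execFile);
// One concurrent install per CPU core, as described in the repro details.
const limit = pLimit(os.cpus().length);

// Placeholder workspace paths; the real setup runs many more projects.
const workspaces = ["projects/lib/prisma", "projects/lib/other"];

async function main() {
  await Promise.all(
    workspaces.map((cwd) => limit(() => run("yarn", ["install"], { cwd }))),
  );
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```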
@theo-splunk I looked into it, which yielded https://github.com/yarnpkg/berry/pull/1633, but I'm unable to reproduce the exact issue you have. Looking at the paths in your exception I can see you're on quite an old version; I'd suggest updating to the latest release (or master, to get https://github.com/yarnpkg/berry/pull/1633) and seeing if that helps.
Ah, yes, I should have included my yarn version in my original post. I've edited that post to include the yarn version.
We will be upgrading to yarn 2.1.1 very soon. We'll let you know if we see this issue again. Thanks @merceyz!
The EISDIR issue was fixed in https://github.com/yarnpkg/berry/pull/1674; I was finally able to reproduce it thanks to the info provided by @n4bb12 in https://github.com/n4bb12/verdaccio-github-oauth-ui/pull/53#issuecomment-676666263.
The Couldn't acquire a lock in a reasonable time issue was just fixed in https://github.com/yarnpkg/berry/pull/3465
Hello, after upgrading to 3.0.2 it seems that the Couldn't acquire a lock in a reasonable time issue still happens on CI. I cannot reproduce it locally, though.
The fix isn't in 3.0.2; it was released in 3.1.0-rc.7.
yarn set version canary works! Thanks!
Describe the bug
Error: Couldn't acquire a lock in a reasonable time
To Reproduce
This error is intermittent but fairly frequent on our Jenkins CI system. As far as I can see there is no contention on the cache (only one agent is running). I have not yet found any concrete steps to reproduce it. The default timeout for this appears to be around one minute, which should be plenty of time.
Environment if relevant (please complete the following information):