jestjs / jest

Delightful JavaScript Testing.
https://jestjs.io
MIT License

cache write race condition across processes #4444

Open bookman25 opened 7 years ago

bookman25 commented 7 years ago

Do you want to request a feature or report a bug? bug

What is the current behavior? When running tests in parallel with the new atomic cache writing, we get rename errors as multiple processes try to write to the same files. Even with the --no-cache option set, Jest still hits rename errors because it still tries to write the cache files.

What is the expected behavior?

  1. I think that --no-cache should not write cache files.
  2. Caching across multiple processes should not collide, or Jest should be able to restart the affected test.

Please provide your exact Jest configuration and mention your Jest, node, yarn/npm version and operating system.

{
    "clearMocks": true,
    "collectCoverageFrom": [
        "packages/**/src/**/*.{ts,tsx}",
        "!packages/sf-lint/**",
        "!**/*.d.ts"
    ],
    "coverageReporters": [
        "text-summary"
    ],
    "moduleFileExtensions": [
        "ts",
        "tsx",
        "js",
        "json"
    ],
    "setupTestFrameworkScriptFile": "<rootDir>/jestEnvironment.js",
    "transform": {
        "\\.(ts|tsx)$": "<rootDir>/scripts/preprocessTypescript.js",
        "\\.(less|css|svg|gif|png|jpg|jpeg)$": "<rootDir>/scripts/preprocessText.js"
    },
    "testRegex": "(Spec|.spec).tsx?$"
}

jest 21.0.1, node 6.9.2, yarn 0.27.x/1.0.0, OS: Windows

thymikee commented 7 years ago

Isn't this related? https://github.com/facebook/jest/pull/4432

bookman25 commented 7 years ago

I don't believe so. I believe the case we see in our repo is the exact same file getting mocked by 2 different processes (while running in parallel), which causes the cache write operation to fail because one process has the file locked. That ticket looks like it's more about different files with the same contents. We don't have any of those cases in the repositories where we ran into this issue.

cristiango commented 7 years ago

We basically run into the same issue with our tests. One easy way to reproduce it was to remove the Jest cacheDirectory to force cache generation on the next run.



    jest: failed to cache transform results in: C:/myniceproject/src/jest-cache/jest-transform-cache-b2e8f1f700b9bd266a0d27bb01b47a2b-34a7e3d71d38ff01f65fdb5abdf5126b/3f/settingsProvider_3f1439e55275a95ecfdb7dcb432f7958
    Failure message: EPERM: operation not permitted, rename 'C:\myniceproject\src\jest-cache\jest-transform-cache-b2e8f1f700b9bd266a0d27bb01b47a2b-34a7e3d71d38ff01f65fdb5abdf5126b\3f\settingsProvider_3f1439e55275a95ecfdb7dcb432f7958.1630729137' -> 'C:\myniceproject\src\jest-cache\jest-transform-cache-b2e8f1f700b9bd266a0d27bb01b47a2b-34a7e3d71d38ff01f65fdb5abdf5126b\3f\settingsProvider_3f1439e55275a95ecfdb7dcb432f7958'

DavidKuennen commented 7 years ago

Having the same issue and can't find a way around it. Jest is basically unusable for us like this.

vkrol commented 7 years ago

We are trying to update to 21.2.0 from 20.0.4 and now we have the following error on our build servers:

Test suite failed to run
[13:46:50]
[13:46:50]    jest: failed to cache transform results in: C:/TeamCity/buildAgent/temp/buildTmp/jest/jest-transform-cache-c60bb8ad55f1dbc29115038800657f2f-4895fc34da3cd9ad1e120af622bca745/3b/fe-selectors_3b56db772e798e2e4d0c9fc7c9e4a770
[13:46:50]    Failure message: EPERM: operation not permitted, rename '...\jest\jest-transform-cache-c60bb8ad55f1dbc29115038800657f2f-4895fc34da3cd9ad1e120af622bca745\3b\fe-selectors_3b56db772e798e2e4d0c9fc7c9e4a770.1701848979' -> '...\jest\jest-transform-cache-c60bb8ad55f1dbc29115038800657f2f-4895fc34da3cd9ad1e120af622bca745\3b\fe-selectors_3b56db772e798e2e4d0c9fc7c9e4a770'
[13:46:50]      
[13:46:50]      at Object.fs.renameSync (fs.js:772:18)
[13:46:50]      at Function.writeFileSync [as sync] (node_modules/write-file-atomic/index.js:192:8)

apphelpyulongs commented 7 years ago

I'm now having the same issue; tests break randomly.

vkrol commented 7 years ago

If I run the tests with the --runInBand flag then as expected everything is OK.

asapach commented 7 years ago

I can see the same issue fairly consistently:

  ● Test suite failed to run

    jest: failed to cache transform results in: .../jest-transform-cache-...
    Failure message: EPERM: operation not permitted, rename '...' -> '...'
        at Error (native)

      at Object.fs.renameSync (fs.js:810:18)
      at Function.writeFileSync [as sync] (node_modules/write-file-atomic/index.js:192:8)

jest 21.2.1, node 6.11.1, OS: Windows

--no-cache does not help, and the jest-transform-cache is still being written. The only thing that helps is --runInBand, which is hardly acceptable for large projects.

Anything we can do to help diagnose the issue? Should I create a repro case?

Is this error critical? Can it be treated as a warning rather than taking down the whole test suite? Is there a way to back off and retry?
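
One way to treat it as a warning rather than a hard failure would be a wrapper along these lines (a minimal sketch; the helper name is hypothetical and this is not anything Jest currently does):

// Hypothetical wrapper: if the atomic cache write loses the rename race to
// another worker (EPERM on Windows), log a warning and keep going instead of
// failing the whole test suite.
const writeFileAtomic = require('write-file-atomic');

function tryWriteCacheFile(cachePath, fileData) {
  try {
    writeFileAtomic.sync(cachePath, fileData, {encoding: 'utf8'});
    return true;
  } catch (e) {
    if (e.code === 'EPERM') {
      console.warn(`jest cache: lost write race for ${cachePath}, continuing`);
      return false;
    }
    throw e;
  }
}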

SimenB commented 7 years ago

Having a small repro would be great

asapach commented 7 years ago

Here's the repro: https://github.com/asapach/jest-cache-issue/ It effectively runs lodash-es through babel-jest to populate the transform cache. This fails for me 80% of the time on two different machines (Win8.1 and Win10). If you remove --no-cache it fails 100% of the time. Adding --runInBand brings it down to 0%.

(Out of curiosity I tried running it in WSL on Win10, and the issue is not reproducible using the POSIX API.)
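
If it helps anyone reproduce locally, a tiny harness that re-runs Jest until it exits non-zero also surfaces the flake (my own sketch; the command line is just an example):

// Sketch: re-run a command until it exits non-zero, to surface a flaky
// cross-process race. The jest arguments below are placeholders.
const {spawnSync} = require('child_process');

for (let run = 1; run <= 20; run++) {
  const result = spawnSync('npx', ['jest', '--no-cache'], {stdio: 'inherit', shell: true});
  if (result.status !== 0) {
    console.log(`failed on run ${run}`);
    process.exit(1);
  }
}
console.log('no failure in 20 runs');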

SimenB commented 7 years ago

Is this just happening on Windows? I don't have access to Windows machines beyond virtual machines, so it's not the easiest for me to debug...

@jeanlauliac you added write-file-atomic in #4088, would you be able to help out?

vkrol commented 7 years ago

This problem is very similar to https://github.com/npm/write-file-atomic/issues/10 and https://github.com/npm/write-file-atomic/pull/22.

asapach commented 7 years ago

Just ran a procmon trace, here's an example of the issue:

Time of Day Process Name PID Operation Path Result Detail
16:54:43.2304011 node.exe 7624 SetRenameInformationFile ...\constant_ee286bbcf367680ce61a90e506522f92.82986661 SUCCESS ReplaceIfExists: True, FileName: ...\constant_ee286bbcf367680ce61a90e506522f92
16:54:43.2305499 node.exe 8208 SetRenameInformationFile ...\constant_ee286bbcf367680ce61a90e506522f92.103872574 ACCESS DENIED ReplaceIfExists: True, FileName: ...\constant_ee286bbcf367680ce61a90e506522f92

As you can see, two processes are trying to rename the same file within 1 ms of each other, and the second one fails.

I think npm/write-file-atomic#22 addresses the async version of writeFile(), but writeFileSync() is still affected.

SimenB commented 7 years ago

Would it be possible to create a repro showing that just using write-file-atomic in worker-farm against the same file fails somehow? It would be great to open an issue against that project, as I think that's where the fix should be.

Or if you could write a test within jest that shows the same error (we have appveyor CI) that could be a start as well?

I'm not even sure what behavior we want in case of this error. Retry the write? Rerun the test? The whole test file?

asapach commented 7 years ago

OK, I'll try to create another repro. I'm not sure it's possible to create a Jest test, because it would require spawning multiple processes, disabling/cleaning the cache, and keeping it running until it fails.

I'm not even sure what behavior we want in case of this error.

Well, firstly the issue should not even happen when --no-cache is on, since the cache should not be populated. Secondly, I'm not sure it's possible to retry the sync operation properly - is it possible to use writeFile() instead of writeFileSync()? That way write-file-atomic should retry automatically (I'll create a test to confirm).
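
For reference, the two write-file-atomic entry points look roughly like this (a sketch with placeholder paths; whether the async queue helps across separate processes is exactly what the test needs to confirm):

const path = require('path');
const os = require('os');
const writeFileAtomic = require('write-file-atomic');

const cachePath = path.join(os.tmpdir(), 'example-transform-result'); // placeholder
const fileData = 'transformed source...'; // placeholder

// Sync API: a concurrent rename by another process surfaces as a thrown EPERM.
writeFileAtomic.sync(cachePath, fileData, {encoding: 'utf8'});

// Async API: writes to the same path from the *same* process are serialized
// internally, but that queue cannot coordinate with other worker processes.
writeFileAtomic(cachePath, fileData, {encoding: 'utf8'}, err => {
  if (err) console.error('cache write failed:', err.code);
});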

SimenB commented 7 years ago

Well, firstly the issue should not even happen when --no-cache is on, since the cache should not be populated.

That's a good point, and should be fixed separately. That way --no-cache can at least be a workaround.

Secondly, I'm not sure it's possible to retry the sync operation properly - is it possible to use writeFile() instead of writeFileSync()?

@cpojer thoughts on making this write async instead of sync? Not sure how that scales. Or do you have another idea on how to fix this?

cpojer commented 7 years ago
asapach commented 7 years ago

Here's the other repro with worker-farm and write-file-atomic: https://github.com/asapach/write-atomic-issue

Findings so far: the sync version fails as expected, but surprisingly the async version fails as well. This means that they probably implement a retry queue only when it runs in the same process, which doesn't help in our case.
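
For anyone curious, the shape of such a repro is roughly the following (my own sketch in the spirit of the linked repo, not its actual code; two files):

// main.js -- farm out several worker processes that all write the same path.
const workerFarm = require('worker-farm');
const workers = workerFarm({maxConcurrentWorkers: 4}, require.resolve('./worker'));

let pending = 4;
for (let i = 0; i < 4; i++) {
  workers(i, err => {
    if (err) console.error('worker', i, 'failed:', err.code || err);
    if (--pending === 0) workerFarm.end(workers);
  });
}

// worker.js -- every process repeatedly renames its temp file onto one target.
const path = require('path');
const os = require('os');
const writeFileAtomic = require('write-file-atomic');

module.exports = (id, callback) => {
  const target = path.join(os.tmpdir(), 'write-atomic-race-target');
  try {
    for (let i = 0; i < 1000; i++) {
      writeFileAtomic.sync(target, `payload from worker ${id}, iteration ${i}`);
    }
    callback(null);
  } catch (err) {
    callback(err); // on Windows this typically surfaces as EPERM on rename
  }
};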

SimenB commented 7 years ago

I'd like to retain that.

New flag? It's a highly misleading name. And on e.g. CI you rarely want the cache anyway, so it's just wasted resources. Or is a cache generated within a single test run used during --no-cache, and does it only ignore existing caches?

Here's the other repro with worker-farm and write-file-atomic

Awesome! Could you open up an issue against write-file-atomic? It feels like the fix should go there, and if not (if they don't want to support multiple processes writing at once), we can revisit on our side. WDYT?

jwbay commented 7 years ago

A patch I tried locally that seemed to work is ignoring the error if it comes from trying to rename to a file with the same content. Since it just means another process 'won' the race, we can ignore it and move on.

// Proposed check for Jest's transform-cache write path (Flow-typed, as in
// Jest's source); `fs` is Node's fs module and `Path` is Jest's path alias.
const cacheWriteErrorSafeToIgnore = (
  e: Error,
  cachePath: Path,
  fileData: string,
) => {
  if (
    e.message &&
    e.message.indexOf('EPERM: operation not permitted, rename') > -1
  ) {
    try {
      // If the file on disk already holds the content we meant to write,
      // another process simply won the rename race and the error is harmless.
      const currentContent = fs.readFileSync(cachePath, 'utf8');
      return fileData === currentContent;
    } catch (e) {
      // Couldn't read the cache file back; treat the original error as real.
    }
  }
  return false;
};

asapach commented 7 years ago

@SimenB, sure, I'll file an issue.

@cpojer, can this error be swallowed/ignored and treated as a warning? It implies that the file has already been written and no data should be lost.

asapach commented 7 years ago

Upstream issue: npm/write-file-atomic#28

jeanlauliac commented 7 years ago

I think this means "rename" is not an atomic operation on Windows, so it breaks the assumption made by write-file-atomic. Unless there's a flag that could be enabled at the Windows API level, this could mean it's impossible to have atomic writes/renames on Windows altogether.

@jwbay your solution looks reasonable to me! 👍 Instead of using indexOf however, I'd use e.code === 'EPERM' (more robust, doesn't depend on specific message). I don't think we should read the file again to check the value, because this could introduce additional concurrency issues (ex. if the file is being written by yet another process at the same time). Would you mind sending a PR, please?
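
With that suggestion applied, the check shrinks to something like this (a sketch of the suggested change, not the patch that eventually landed):

// Sketch of the suggested simplification: rely on the error code rather than
// the message text, and skip re-reading the cache file.
const cacheWriteErrorSafeToIgnore = e => e.code === 'EPERM';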

davidjgoss commented 7 years ago

I was about to start work on a PR for write-file-atomic along the lines of "if we're asked to write a file sync but it's already in the queue to be written async, bail out" (maybe with an option to switch the behaviour on). But if we're happy to handle this at the Jest level, I won't hurry. cc @jeanlauliac

jeanlauliac commented 7 years ago

I was about to start work on a PR for write-file-atomic along the lines of "if we're asked to write a file sync but it's already in the queue to be written async, bail out" (maybe with an option to switch the behaviour on).

I think adding this logic (local queue) wouldn't fix the issue, because it happens mostly when different processes try to write to (rename to) the same file.

To fix concurrency issues once and for all, we may have to reconsider how we do caching, for example by having a single process that accesses the cache, with which we communicate over some kind of IPC. Existing key/value store systems may be handy, such as memcached.
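
As a rough illustration of that idea (my own sketch, not a concrete proposal for Jest): a single forked process owns the cache directory and serializes all writes, while workers only send it messages.

// Sketch: one "cache owner" process performs every write, so worker processes
// never race on the same file. File names and message shape are made up.
const {fork} = require('child_process');
const fs = require('fs');
const path = require('path');
const os = require('os');

const cacheDir = path.join(os.tmpdir(), 'single-writer-cache');

if (process.env.CACHE_OWNER) {
  // Cache owner: applies writes one at a time as messages arrive over IPC.
  try { fs.mkdirSync(cacheDir); } catch (e) { /* directory already exists */ }
  process.on('message', ({key, value}) => {
    fs.writeFileSync(path.join(cacheDir, key), value);
  });
} else {
  // Parent (standing in for a Jest worker): delegate the write instead of
  // touching the cache directory directly.
  const env = Object.assign({}, process.env, {CACHE_OWNER: '1'});
  const owner = fork(__filename, [], {env});
  owner.send({key: 'transform-abc123', value: 'transformed source...'});
  setTimeout(() => owner.disconnect(), 500); // let the owner flush, then stop
}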

davidjgoss commented 7 years ago

I think adding this logic (local queue) wouldn't fix the issue, because it happens mostly when different processes try to write to (rename to) the same file.

Ah, I perhaps misunderstood the issue then. The way I read it, the library already has a queuing mechanism that works nicely for the async requests, but if you mix in sync requests as well you can get collisions.

mekwall commented 7 years ago

My pull request referenced above should solve this issue. At least it did for me!

asapach commented 7 years ago

@mekwall, I think they are using rename() in the async version of writeFile(), and it still fails in my test: https://github.com/asapach/write-atomic-issue. Could you please try running my repro? I think your change might minimize the likelihood of this problem happening, but it does not eliminate it completely.

mekwall commented 7 years ago

@asapach Did you try with my changes? I tried several times, and I never got EPERM: operation not permitted, rename with my changes, while getting it every time without them.

asapach commented 7 years ago

@mekwall, yep, still failing with your changes (although async-ly). (Corrected below.)

asapach commented 7 years ago

Or rather, technically it doesn't fail (because the sync flow is not interrupted), but the console is still littered with EPERM errors.

mekwall commented 7 years ago

@asapach I found the issue you're having. It's in the graceful-fs polyfill. I've posted a fix in this PR: https://github.com/isaacs/node-graceful-fs/pull/119

asapach commented 7 years ago

@mekwall, yes this does seem to address the issue - no more errors in both sync and async versions. The problem now is that temp files are not removed, because fs.unlinkSync(tmpfile) is never called: https://github.com/npm/write-file-atomic/pull/29/files#diff-168726dbe96b3ce427e7fedce31bb0bcR198

mekwall commented 7 years ago

@asapach I added unlink to the graceful-fs rename, but I'm not sure that's the right way to go. Afaik fs.rename uses the MoveFile function, which should not copy the source to the destination: the source just changes name, and the source and destination should never exist at the same time.

asapach commented 7 years ago

@mekwall, that does help a bit, but in some cases if the worker is terminated early (because all the work is done), some files are not cleaned up, since it doesn't wait for the cleanup. The async version seems to work fine.

mekwall commented 7 years ago

@asapach It's not working as expected at all. I need to dive into the innards of Node to figure out how it actually works and what the intended behavior should be. I believe the whole point of graceful-fs is to make things work the same on every platform, so I'll dig deeper into that. At least we've found the culprit :)

mekwall commented 7 years ago

@asapach I realized that my PR for write-file-atomic wouldn't work, so I took another approach: adding fs.renameSync to graceful-fs with the same workarounds as fs.rename, but blocking. This makes your test work just as expected!
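
For context, the idea is to keep retrying a rename that fails with the transient errors Windows returns while another process still holds the file; a blocking equivalent necessarily spins between attempts, roughly like this (an illustrative sketch, not the code in the PR):

const fs = require('fs');

// Illustrative sketch: retry a rename that fails with transient EACCES/EPERM
// errors on Windows, up to a time budget, then rethrow the original error.
function renameSyncWithRetry(from, to, timeoutMs = 60000) {
  const start = Date.now();
  for (;;) {
    try {
      return fs.renameSync(from, to);
    } catch (err) {
      if (
        (err.code !== 'EACCES' && err.code !== 'EPERM') ||
        Date.now() - start >= timeoutMs
      ) {
        throw err;
      }
      // A sync API cannot yield, so busy-wait briefly before trying again.
      const until = Date.now() + 10;
      while (Date.now() < until) {}
    }
  }
}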

asapach commented 7 years ago

@mekwall, thanks, I've verified your changes on both of my repro cases and neither of them fails. On the downside I see increased CPU and disk usage for the sync version, but that's probably expected.

SimenB commented 7 years ago

Thanks a lot folks for picking this up and helping to get it fixed. Much appreciated! ❤️ Hopefully the fix in graceful-fs is the correct one, and it gets released.

mekwall commented 7 years ago

@SimenB You're welcome! We're pained by this at work, so my team gave me some time to investigate. The changes will affect a lot of packages, so it will most likely take time for them to be accepted :/

manu-st commented 7 years ago

Any idea when this workaround will make it to a release?

samvloeberghs commented 7 years ago

@cpojer could you provide some more info on why this was closed? Is there a fix? We still have this issue.

cpojer commented 7 years ago

Apologies, seems like the fix has not landed in graceful-fs yet :(

SimenB commented 6 years ago

Can multiple people confirm that using https://github.com/isaacs/node-graceful-fs/pull/119 fixes their issues?

You can use the fork by using yarn resolutions, see https://yarnpkg.com/en/docs/selective-version-resolutions, which should allow you to deploy the fix to CI etc.

e.g.

{
  "resolutions": {
    "graceful-fs": "mekwall/node-graceful-fs#a10aa576f771d7cf3dfaee523f2e02d6eb11a89f"
  }
}

mekwall commented 6 years ago

@SimenB It solves the issue for me, at least 😄

asapach commented 6 years ago

+1 For me as well.

mreishus commented 6 years ago

@SimenB Also fixed my issue, and I'm now able to use Jest 22 on Windows. (We were stuck on 20 before this.)

Edit: Actually, it worked on my dev laptop, but did not work on the build server. It's running yarn 1.2.1 though, maybe that's why?

[16:47:55][Step 5/8]     jest: failed to read cache file: D:/TeamCity/buildAgent2/temp/buildTmp/jest/jest-transform-cache-c39254d365b4bcb2c90f133d4b359d91-56a1804d8d831b3401a35a7e81857f3b/7e/rafPolyfill_7e7a83ed3f2680ba9aec0e45f59ade5d
[16:47:55][Step 5/8]     Failure message: EPERM: operation not permitted, open 'D:\TeamCity\buildAgent2\temp\buildTmp\jest\jest-transform-cache-c39254d365b4bcb2c90f133d4b359d91-56a1804d8d831b3401a35a7e81857f3b\7e\rafPolyfill_7e7a83ed3f2680ba9aec0e45f59ade5d'
[16:47:55][Step 5/8]       
[16:47:55][Step 5/8]       at readCacheFile (node_modules/jest-runtime/build/script_transformer.js:465:60)

SimenB commented 6 years ago

Yarn 1.0.0 should be enough, but it's worth trying an upgrade.

manu-st commented 6 years ago

I just tried putting the resolution in, but it is still failing for me. However, now I get both ENOENT and EPERM errors:

    jest: failed to read cache file: C:/Users/dev/AppData/Local/Temp/jest/jest-transform-cache-857f905b2da01d52a9d1d17b6772ea4a-3a91587e29d4fef23c6e0cf16b2f5679/7d/index_7d0afc82f0b29ec31c4b5f296cbdee74
    Failure message: ENOENT: no such file or directory, open 'C:\Users\dev\AppData\Local\Temp\jest\jest-transform-cache-857f905b2da01d52a9d1d17b6772ea4a-3a91587e29d4fef23c6e0cf16b2f5679\7d\index_7d0afc82f0b29ec31c4b5f296cbdee74'

      at Object.fs.openSync (../fs.js:653:18)
      at Object.fs.readFileSync (../fs.js:554:33)

and

    jest: failed to read cache file: C:/Users/dev/AppData/Local/Temp/jest/jest-transform-cache-857f905b2da01d52a9d1d17b6772ea4a-3a91587e29d4fef23c6e0cf16b2f5679/c4/std_pb_c403e6e7645c904896b66f44a3e43606
    Failure message: EPERM: operation not permitted, open 'C:\Users\dev\AppData\Local\Temp\jest\jest-transform-cache-857f905b2da01d52a9d1d17b6772ea4a-3a91587e29d4fef23c6e0cf16b2f5679\c4\std_pb_c403e6e7645c904896b66f44a3e43606'

      at Object.fs.openSync (../fs.js:653:18)
      at Object.fs.readFileSync (../fs.js:554:33)

mekwall commented 6 years ago

@mreishus Does your build server run Windows? The fixes in graceful-fs only target Windows; this shouldn't happen on a Linux-based OS.