Open bregenspan opened 1 month ago
Thank you! I was just pulling my hair out over this exact problem. Removing `workerIdleMemoryLimit` made `runInBand` work as expected again.
Hmm, seems like a bug to me that `runInBand` isn't respected when passing `workerIdleMemoryLimit` 🤔 there shouldn't be any worker to limit the memory of.
EDIT: Right, that's what you said!
Solving this via updating docs is one answer. Alternatively, one thought is to update the fix that was added in #13846 to only apply to the `--maxWorkers=1` case, but not to `--runInBand`, which could be reverted to its previous behavior of running tests in the parent process (can PR this if it sounds like an acceptable answer).
Yes please, that makes sense 👍
Another option is to add a warning when both `--runInBand`/`--maxWorkers=1` and `--workerIdleMemoryLimit` are set (restricting it to the case where Node received the `--inspect` or `--inspect-brk` flag, if this is possible).
Printing a warning probably makes sense when both `runInBand` and `workerIdleMemoryLimit` are passed? Just saying that the memory limit will be ignored because no workers are spawned.
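The suggested warning could be sketched roughly like this. This is a hypothetical illustration, not Jest's actual implementation; `NormalizedOptions` and `memoryLimitWarning` are made-up names, with fields mirroring the CLI flags:

```typescript
// Hypothetical sketch of the proposed warning -- not Jest's real code.
// Field names mirror the --runInBand / --workerIdleMemoryLimit CLI flags.
interface NormalizedOptions {
  runInBand?: boolean;
  workerIdleMemoryLimit?: number | string;
}

function memoryLimitWarning(options: NormalizedOptions): string | null {
  // Only warn when both options are set: with runInBand, tests run in the
  // parent process, so there is no worker whose memory could be limited.
  if (options.runInBand && options.workerIdleMemoryLimit != null) {
    return (
      'workerIdleMemoryLimit will be ignored with --runInBand: ' +
      'tests run in the parent process, so no worker is spawned to limit.'
    );
  }
  return null;
}

console.log(memoryLimitWarning({runInBand: true, workerIdleMemoryLimit: '500MB'}));
```

Restricting the warning to debugger sessions (as suggested above) would additionally need a check for the `--inspect`/`--inspect-brk` Node flags.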
(And yet another option is to simply wait until only Node 20+ is supported, and remove `--workerIdleMemoryLimit` then.)
Yeah, that's the plan, but it's a bit into the future. They even backported it to Node 18 (https://nodejs.org/en/blog/release/v18.20.0#vm-fix-v8-compilation-cache-support-for-vmscript), but the next release of Jest will still support Node 16.
Looking into it a little more, the fix in https://github.com/jestjs/jest/pull/13846 is not to blame; the new logic was not supposed to apply to the `runInBand: true` case, but `globalConfig.runInBand` is always `undefined` even when `--runInBand` is passed to the CLI.
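To illustrate the failure mode described above (a simplified hypothetical sketch, not Jest's actual code; `GlobalConfig` and `createsWorkerPool` are made-up names): if the normalized config never receives the flag, a guard keyed on `runInBand` silently falls through to the worker-pool branch.

```typescript
// Illustrative sketch -- not Jest's real code. If jest-config drops the
// --runInBand flag, globalConfig.runInBand stays undefined and the guard
// below falls through to the worker-pool branch.
interface GlobalConfig {
  runInBand?: boolean;
  workerIdleMemoryLimit?: number;
}

// Simplified to the in-band vs. single-worker-pool decision only.
function createsWorkerPool(globalConfig: GlobalConfig): boolean {
  if (globalConfig.runInBand) {
    return false; // --runInBand: run tests in the parent process
  }
  // The #13846 fix: with a memory limit set, use a (single-)worker pool
  // so idle memory usage can be enforced.
  return globalConfig.workerIdleMemoryLimit != null;
}

// The flag was passed on the CLI, but never made it into the config:
console.log(createsWorkerPool({workerIdleMemoryLimit: 500_000_000})); // true
```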
@SimenB your fix in https://github.com/jestjs/jest/pull/14578/files appears to resolve this issue as well (the issue no longer reproduces if I patch in the fix)! So, I think this issue can be closed as soon as a new release is cut. Apologies, I thought I tested with the latest HEAD but must not have been linking in an updated `jest-config`.
Edit: see comments below; it looks like this issue will be fixed via an update to `jest-config` made in https://github.com/jestjs/jest/pull/14578/files (once a new version is released)
Page(s)
https://jestjs.io/docs/troubleshooting#tests-are-failing-and-you-dont-know-why
https://jestjs.io/docs/cli#--runinband
Description
It would be helpful to note that, when `--workerIdleMemoryLimit` is set, `--runInBand` does not "Run all tests serially in the current process, rather than creating a worker pool of child processes that run tests" as stated in https://jestjs.io/docs/cli#--runinband. Instead, it creates a single-worker pool. Due to this, the troubleshooting instructions in https://jestjs.io/docs/troubleshooting#tests-are-failing-and-you-dont-know-why are incomplete; it's necessary to unset `--workerIdleMemoryLimit`, if one is set, before attempting to use the Node.js debugger.

Solving this via updating docs is one answer. Alternatively, one thought is to update the fix that was added in https://github.com/jestjs/jest/pull/13846 to only apply to the `--maxWorkers=1` case, but not to `--runInBand`, which could be reverted to its previous behavior of running tests in the parent process (can PR this if it sounds like an acceptable answer). Another option is to add a warning when both `--runInBand`/`--maxWorkers=1` and `--workerIdleMemoryLimit` are set (restricting it to the case where Node received the `--inspect` or `--inspect-brk` flag, if this is possible). (And yet another option is to simply wait until only Node 20+ is supported, and remove `--workerIdleMemoryLimit` then.)

(Repro steps to see the current vs. expected behavior when attempting to follow the current troubleshooting steps are here: https://github.com/bregenspan/jest-idle-memory-limit-bug-repro)