cozmy opened this issue 5 years ago
Your repo takes way less time for me:

- 4ms test case
- 0.7s according to jest
- 1.4s according to time

Reruns are pretty much instant.
On Windows, startup performance (of any Node process and especially if you do a lot of IO) will always be noticeably slower. You can try upgrading the Node version to 10 or 11 (8 is quite old), I'd expect a 10-20% speedup coming from that. I'd try to profile the performance issue a bit, but I can't reproduce the terrible performance on my systems :/
Test: 8-10ms Total: 2.4-2.8s
Reruns are very erratic if the rerun was triggered by a file change:
Test: 2-14ms Total: 0.8-3.3s (typically on the upper end though, <2.5s is uncommon).
If it was triggered manually then it is pretty consistent with the first run. Interestingly, if I run with --no-cache, then the first run is similar, but reruns on file changes are 0.8-0.9s and manual reruns are <0.2s. I think this might be because it throws away the timings of each run and causes Jest to run in-band (no extra workers). I usually run in-band on Windows because it's nearly always the fastest option, at least in watch mode. In WSL the performance is not quite as bad (though well short of native Linux), so I typically let Jest do whatever it wants.
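(For anyone who wants to try the same setup: a minimal package.json sketch that forces watch mode in band. --watch and --runInBand are real Jest CLI flags; the script name is just an example.)

{
  "scripts": {
    "test:watch": "jest --watch --runInBand"
  }
}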
As a side note, my runs seem to have been quite a bit faster than yours despite this machine having pretty dated hardware. That's pretty strange in itself.
System:
OS: Windows 8.1
CPU: x64 Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
Binaries:
Node: 8.11.3
Yarn: 1.7.0
npm: 5.6.0
npmPackages:
jest:
wanted: ^24.1.0
installed: 24.1.0
To give you a bit of background, I was trying to set up the test environment on my work laptop and I noticed that my React tests were way too slow. That's when I discovered the poor performance. I've also tested this on Linux and it's 2x as fast, but still, 3 seconds on re-run for some basic tests is too much.
It's weird that Jest reports that a test runs in 10ms but the total time is 5 seconds.
Even on my work laptop, running the tests of a create-react-app project is slow on re-run:
I guess this issue is a duplicate of https://github.com/facebook/jest/issues/7619 and https://github.com/facebook/jest/issues/6694 so most likely it can be closed.
Mocha doesn't take 4 seconds to initialise in this case. I changed your test to do require('assert').equal(1, 112) and it took less than 200ms:
fabio@fabio-thinkpad $ time mocha index.spec.js
jest performance
1) should be fast
0 passing (6ms)
1 failing
1) jest performance
should be fast:
AssertionError [ERR_ASSERTION]: 1 == 112
+ expected - actual
-1
+112
at Context.<anonymous> (index.spec.js:3:23)
real 0m0.173s
user 0m0.153s
sys 0m0.024s
By the way, if you add another test, it doesn't take 173ms × 2 to run.
I've never seen a testing framework that takes more than 500ms to perform a single equality check. There must be some configuration option we can set here, and if there is, I would like to make the case for it to be in the defaults.
Just copied the original test 50 times to make sure. Results:
real 0m0.193s
user 0m0.157s
sys 0m0.040s
Tests run fine for me.
I tried this out on my Mac and got run times of 3-4s, so a bit closer to what you are describing. There is nothing that takes way longer than expected in the profile though; as I said, it's just that IO performance is very OS-dependent, and most of this is require loading modules:
One thing caught my eye in the flame chart though: setting up a JSDOM environment is very expensive (the middle part of the chart). Setting testEnvironment to node if the tests don't need it cut the test time in half for me. I find it a bit unfortunate that the default is jsdom. @SimenB has there been any discussion on the default testEnvironment?
Also, I noticed that watch mode is not running inBand, which I think it should in this case. After setting the test environment to node and passing --watchAll --runInBand, reruns are pretty much instant. I'll try to find out why Jest doesn't decide to runInBand automatically in this case.
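(A minimal sketch of that change, for anyone following along. testEnvironment is a real Jest config option; only use node if your tests don't touch the DOM.)

// jest.config.js
module.exports = {
  // Skip the expensive JSDOM setup for tests that don't need a DOM.
  testEnvironment: 'node',
};

Then run jest --watchAll --runInBand for near-instant reruns, as described above.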
@SimenB has there been any discussion on the default testEnvironment?
Yes, #5815. If you use jest --init, it defaults to node as well. I'd forgotten #5815 was accepted, would be nice to get that done for 25.
After some investigation, the reason why Jest does not run the test inBand is the following:

- The first run is never runInBand, in order to keep the main thread unblocked and make sure the TTY remains responsive
- Because this first run is not inBand, the overhead causes it to take about 2.2s on my device (even without JSDOM)
- That recorded time exceeds the SLOW_TEST_TIME threshold, so it again decides not to runInBand
- You can force runInBand once (and do one rerun, because the first runInBand execution also has to load everything, which is slow), which causes the test time to go down to <100ms

I don't have a good solution to this other than making the threshold more generous right now.
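(To make the decision above concrete, here is a simplified sketch of that kind of heuristic. This is not Jest's actual code, and the threshold value is assumed for illustration.)

// Sketch of the watch-mode scheduling decision described above (not Jest's real code).
const SLOW_TEST_TIME = 1000; // ms, assumed threshold for illustration

function shouldRunInBand(tests, cachedTimings) {
  // First run, or after --no-cache: no timings yet, so Jest plays it safe with workers.
  if (cachedTimings.length < tests.length) return false;
  // Only stay on the main thread when every recorded run was fast. A slow
  // not-inBand first run therefore keeps the suite out of band on every rerun.
  return cachedTimings.every((ms) => ms < SLOW_TEST_TIME);
}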
Because this first run is not inBand, the overhead causes it to take about 2.2s on my device (even without JSDOM)
Can you test with node worker threads? Still some overhead, but maybe not so much
@SimenB no noticeable change. jasmine2() has 1.5s total time when it initially requires all the modules; it's not really the worker overhead itself. In band, the modules will be loaded after the first run, so subsequent runs are almost instant. (jest-circus is not significantly faster to load either.)
There's no mechanism in place to reuse workers across watch runs, is there? Would it be unsafe to do so in case there is some kind of leftover state in the worker?
Workers are always reused, do you mean the sandbox? If so, no - we want every test to run in isolation
However, we want to lift jasmine/circus out of the sandbox and inject some integration points. Might help?
Workers are always reused
That's not what I'm seeing though, this is the first part of a rerun:
This is (like some other issues mentioned) definitely a regression introduced in 23.0.0. In 22.4.4, jasmine2() only takes about 600ms (less than half the time).
That should be fixed - the worker pool should be reused.
Could you bisect?
Will do, but it's not what causes the performance issue here. After quite a bit of profiling I've determined the culprit to be micromatch@3. I'll open a PR where we can discuss how to handle this.
I've determined the culprit to be micromatch@3

Really? Interesting! Looking forward to the analysis 😄
Still no conclusion? :(
See #8032 / #8046 for the latest info
any updates now that micromatch@4 has been released?
Seeing as it's a major, we'll probably upgrade in Jest 25 to be on the safe side
@SimenB I skimmed through all the posts @jeysal wrote. Great work!!
Questions:
- Can we put this micromatch fix at the beginning of the work for jest@25, so we can play with it and see whether it covers the major problem present on Windows machines or needs more investigation?
- Do you have a next branch for the future release, or will master contain the micromatch fix?
Assuming no breaking changes, you can use yarn resolutions to test yourself.
https://yarnpkg.com/lang/en/docs/selective-version-resolutions/
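(That would look roughly like this in package.json; the version range is an example, pick the latest 4.x release.)

{
  "resolutions": {
    "micromatch": "^4.0.2"
  }
}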
I've been using yarn resolutions + micromatch v4 with no issues.
@jeysal well done!
A little follow-up from my side with a test repo: https://github.com/nasreddineskandrani/benchmark_jest. I took the initial code of this issue and ran the test. With micromatch@4 and jest@24 it performs close to jest@22, if not better, so for me it's conclusive (check the READMEs).
I'm waiting for jest@25 to try it at work with Angular and npm (no way to use yarn for now...). We have more than 2000 unit tests :) so it'll be quick to see the difference.
@SimenB an idea: you could add an automated benchmark to CI that compares the timing of the previously released version against the current branch of Jest (or one run nightly against master). Say, if a run takes more than 2x the previous time, fail the build. I don't know whether such a test would be stable, but Jest needs benchmark automation :)
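(A rough sketch of that idea as a CI step, assuming the baseline duration in seconds is stored in a hypothetical .jest-baseline-seconds file recorded from the last release.)

BASELINE=$(cat .jest-baseline-seconds)  # hypothetical file with the last release's time
START=$(date +%s)
yarn jest --ci
END=$(date +%s)
# Fail the build if the current branch is more than 2x slower than the baseline.
[ $((END - START)) -le $((2 * BASELINE)) ] || { echo "jest got >2x slower"; exit 1; }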
It would be nice to have the ability to see how long each test takes.
Looks like micromatch was bumped to 4.x in https://github.com/facebook/jest/pull/8852
Correct - you can install jest@next to test it out
Tried v25 with https://www.npmjs.com/package/@angular-builders/jest. Didn't help.
The next version actually made tests run even longer in my case.
I'm still noticing a performance hit on both memory and execution time with jest@next.
Peak Memory Usage (jest@23.6.0): [screenshot]
Peak Memory Usage (jest@next): [screenshot]
Execution Time (jest@23.6.0)
real 4m11.202s
user 24m26.390s
sys 2m28.008s
Execution Time (jest@next)
real 7m29.054s
user 38m46.197s
sys 6m17.779s
Any tips on profiling this further? Perhaps a dependency of jest is still using micromatch@3.x, which is why the yarn resolution method fixed it for other people? The following jest dependencies are still tied to micromatch@3.x:

- test-exclude@5.2.3 (part of @jest/transform@25.0.0)
- sane@4.0.3 (part of jest-haste-map@^25.0.0)

For added context, my upgrade from 23.6.0 to next requires a jest.config.js change that looks like:
setupTestFrameworkScriptFile: '<rootDir>/test-utils/jest.setup.js',
to
setupFilesAfterEnv: [
'<rootDir>/node_modules/regenerator-runtime/runtime.js',
'<rootDir>/test-utils/jest.setup.js'
],
So it's also possible there are other factors in play with my performance numbers.
And just for more information, here is my run with jest@next and the yarn resolution for micromatch@4.0.2:
Peak Memory Usage (jest@next w/ micromatch@4.0.2 resolution): [screenshot]
Note: Memory usage does go down and plateau around 600mb though.
Execution Time (jest@next w/ micromatch@4.0.2 resolution):
real 5m45.275s
user 28m0.123s
sys 5m25.041s
@j3tan one regression has been identified and fixed in #8890; diffs for patch-package can be found in https://github.com/facebook/jest/issues/7811#issuecomment-527107215. Could you test with those?
sane has an issue for Micromatch 4 here: https://github.com/amasad/sane/issues/151
test-exclude seems to have replaced micromatch with minimatch a year ago: https://github.com/istanbuljs/istanbuljs/commit/01096aedc7c6a2615028bfab6e86202290514861#diff-d812e77478dc2316c1698d88682d3b05R37
You can run yarn why micromatch in your project to identify where it comes from. Running it on a clean install of jest@next gives this output:
[1/4] 🤔  Why do we have the module "micromatch"...?
[2/4] 🚚  Initialising dependency graph...
[3/4] 🔍  Finding dependency...
[4/4] 🚡  Calculating file sizes...
=> Found "micromatch@4.0.2"
info Has been hoisted to "micromatch"
info Reasons this module exists
- Hoisted from "jest#jest-cli#@jest#core#micromatch"
- Hoisted from "jest#jest-cli#jest-config#micromatch"
- Hoisted from "jest#jest-cli#@jest#core#@jest#transform#micromatch"
- Hoisted from "jest#jest-cli#@jest#core#jest-haste-map#micromatch"
- Hoisted from "jest#jest-cli#@jest#core#jest-message-util#micromatch"
info Disk size without dependencies: "72KB"
info Disk size with unique dependencies: "256KB"
info Disk size with transitive dependencies: "404KB"
info Number of shared dependencies: 7
=> Found "sane#micromatch@3.1.10"
info Reasons this module exists
- "jest#jest-cli#@jest#core#jest-haste-map#sane" depends on it
- Hoisted from "jest#jest-cli#@jest#core#jest-haste-map#sane#anymatch#micromatch"
info Disk size without dependencies: "108KB"
info Disk size with unique dependencies: "1.91MB"
info Disk size with transitive dependencies: "4.48MB"
info Number of shared dependencies: 29
So as soon as sane updates, it should be gone from the dependency tree.
Thanks for the feedback, I misread the diff in my upgrade for test-exclude.
Here is my yarn why micromatch:
=> Found "micromatch@3.1.10"
info Has been hoisted to "micromatch"
info Reasons this module exists
- Hoisted from "readdirp#micromatch"
- Hoisted from "add-asset-html-webpack-plugin#micromatch"
- Hoisted from "anymatch#micromatch"
- Hoisted from "webpack#micromatch"
- Hoisted from "webpack-cli#findup-sync#micromatch"
- Hoisted from "globby#fast-glob#micromatch"
- Hoisted from "stylelint-config-rational-order#stylelint#micromatch"
- Hoisted from "jest#jest-cli#@jest#core#jest-haste-map#sane#micromatch"
Looks like I'll have to wait for sane to upgrade to verify, though I may try out https://yarnpkg.com/lang/en/docs/selective-version-resolutions/ targeting sane if I have the chance.
Running with the patch-package fix you mentioned yielded good results!
real 2m47.592s
user 15m9.926s
sys 1m31.482s
Memory usage still peaked around 800MB per node process though.
And what should npm users do?
NPM users can still use https://www.npmjs.com/package/patch-package
You can either downgrade to jest@23.6.0, use patch-package with the diff found here https://github.com/facebook/jest/issues/7811#issuecomment-527107215, or wait for https://github.com/facebook/jest/pull/8890 to merge and use jest@next.
Edit: https://github.com/facebook/jest/pull/8890 is merged but has not been released yet; please wait for an official announcement before trying to use jest@next. Follow along in this issue instead (https://github.com/facebook/jest/issues/7811#issuecomment-529203848).
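(For reference, the basic patch-package flow with npm looks roughly like this. This is a sketch; the patch filename is an example and depends on the exact package and version being patched.)

npm install --save-dev patch-package
# Save the diff from the linked comment as e.g. patches/jest-cli+24.9.0.patch, then:
npx patch-package
# To re-apply automatically on every install, add to package.json:
#   "scripts": { "postinstall": "patch-package" }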
So #8890 was merged, but nothing has changed :(
@romua it's merged, but not released yet. Patience.
Makes a lot of sense. Thanks for the answer :)
@thymikee any updates?
"jest": "^25.0.0" still sooo slooooooow, subsequent save/test 7.89s. One file 4 simple asserts. "test": "jest --watch -- calc.test.js" --runInBand does not make significant difference. 4.29s
Tried also: "test": "jest --watch --runInBand --env=node -- calc.spec.js"no change.
In comparison jasmin
=> 1s and mocha
blazing fast => 50ms - 500ms ...
It takes a long time for very simple tests.
jest: v25.1.0
It is nondeterministic though; sometimes it completes within 2s.
Experiencing the same issue with "jest": "^25.1.0".
The main startup delay seems to come from require()-ing all the dependencies. time-require reports around 10 seconds worth of require() calls on my current system. The main culprits, according to that tool, are Babel (@babel/preset-env takes 3.6 seconds) and JSDOM.
For JSDOM, there is already some work for cutting down dependencies and thus improving load times. They also have an active issue regarding request, which takes my machine 0.25 seconds to load.
An actionable item for the jest team might be replacing sane.
I'm trying to do some looking into this, and when I profile a suite of 100 example tests (which takes a bit over 8s w/ jest --runInBand, and about 1.5s w/ mocha), I see that vm's compileFunction is taking most of the time.
As far as I can tell, it's called for over 700 modules for each test file. Below is the list of files. Pretty much all of these are files I would not want to be loaded more than once.
Is there a way to disable the registry isolation for tests? (I realize this will break mocking and I am quite okay with that).
Also timing and files compiled are pretty much identical w/ and w/o jest-circus.
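(For context on what that call does: a simplified sketch of a CommonJS-style module wrapper built with Node's vm.compileFunction. This is not Jest's actual implementation; Jest does something like this per module, per test file, which is where the 700+ compilations come from.)

const vm = require('vm');

// Compile the module source into a callable, injecting CommonJS-style parameters.
const wrapper = vm.compileFunction(
  'module.exports = { answer: 40 + 2 };',
  ['module', 'exports', 'require']
);

// Each test file gets its own fresh module object, so no state is shared.
const mod = { exports: {} };
wrapper(mod, mod.exports, require);
console.log(mod.exports.answer); // 42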
Adding --runInBand didn't help in my case. I have this in my package.json. Should I get an improvement?
"scripts": {
"test": "jest --runInBand"
},
I was using Jasmine before, when a frontend developer told me "OMG you're not using Jest, you're so 2017!" So I tried Jest and bam, now I wait 4 seconds for the simplest test.
🐛 Bug Report
I've created a simple test with no configuration. You can find it here. It takes 7.6 seconds just to start jest, and then, on file change, it takes 5 seconds to re-run. There's clearly something wrong here.
Mocha and Chai, for example, start in around 4 seconds, but afterward, the re-run is almost instant.
It's weird that Jest reports that the test takes 14ms but shows me the result in 5 seconds.
On top of this, if you add more basic tests, the time just seems to go up.
To Reproduce
Clone my minimal repository, yarn install and yarn test.
Expected behavior
It should be faster than this.
Link to repl or repo (highly encouraged)
https://github.com/cozmy/jest-performance
Run npx envinfo --preset jest
Paste the results here: