jestjs / jest

Delightful JavaScript Testing.
https://jestjs.io
MIT License

Jest 24 is slower than Jest 23.6.0 on the same test suite #7811

Closed vitalibozhko closed 5 years ago

vitalibozhko commented 5 years ago

🐛 Bug Report

Filing this bug per #7732 (https://github.com/facebook/jest/issues/7732#issuecomment-460676324). Jest v24 takes 38 seconds vs. 28 seconds for v23 to run 50 suites (296 tests).

To Reproduce

Steps to reproduce the behavior: Jest v23 setup: .babelrc

{
  "presets": ["es2015", "react", "stage-3", "env"]
}

package.json:

...
    "babel-core": "6.26.0",
    "babel-jest": "22.4.1",
    "babel-loader": "7.1.2",
    "babel-preset-env": "1.6.1",
    "babel-preset-es2015": "6.24.1",
    "babel-preset-react": "6.24.1",
    "babel-preset-stage-3": "6.24.1",
    "jest": "23.6.0",
...

Jest v24 setup: .babelrc

{
  "presets": ["@babel/preset-env", "@babel/preset-react"],
  "plugins": [
    "@babel/plugin-syntax-dynamic-import",
    "@babel/plugin-syntax-import-meta",
    ["@babel/plugin-proposal-class-properties", { "loose": false }],
    "@babel/plugin-proposal-json-strings"
  ]
}

package.json:

...
    "@babel/core": "7.2.2",
    "@babel/plugin-proposal-class-properties": "7.3.0",
    "@babel/plugin-proposal-json-strings": "7.2.0",
    "@babel/plugin-syntax-dynamic-import": "7.2.0",
    "@babel/plugin-syntax-import-meta": "7.2.0",
    "@babel/preset-env": "7.3.1",
    "@babel/preset-react": "7.0.0",
    "jest": "24.1.0",
...

Jest config is not changed between v23 and v24:

module.exports = {
  'verbose': false,
  'coverageThreshold': {
    'global': {
      'branches': 40,
      'functions': 45,
      'lines': 50,
      'statements': 50
    }
  },
  'projects': [
    '<rootDir>/src/test/js/reactapp'
  ],
  'moduleDirectories': [
    'src',
    'node_modules'
  ],
  'testURL': 'http://mockserver:9999',
  'collectCoverage': true,
  'coverageDirectory': '<rootDir>/build/coverage/',
  'coverageReporters': [
    'json',
    'html'
  ]
};

Expected behavior

Jest v24 running performance matches v23.

Link to repl or repo (highly encouraged)

Not available

Run npx envinfo --preset jest

Paste the results here:

System:
    OS: Windows 7
    CPU: (4) x64 Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
  Binaries:
    Node: 8.12.0 - C:\Program Files\nodejs\node.EXE
    npm: 6.4.1 - C:\Program Files\nodejs\npm.CMD
thymikee commented 5 years ago

It would be great if you could attach a repro that one can check out and analyze.

oskarolsson-jimdo commented 5 years ago

We're also seeing quite significant performance regression for Jest 24.

  System:
    OS: Linux 4.15 Ubuntu 18.04.1 LTS (Bionic Beaver)
    CPU: (12) x64 Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
  Binaries:
    Node: 10.15.0 - ~/.nvm/versions/node/v10.15.0/bin/node
    Yarn: 1.13.0 - ~/.nvm/versions/node/v10.15.0/bin/yarn
    npm: 6.4.1 - ~/.nvm/versions/node/v10.15.0/bin/npm
  npmPackages:
    jest: 24.1.0 => 24.1.0 

This is with 1400 tests on a large code base. Running a custom typescript transformer.

| Options | Jest 23.6.0 | Jest 24.1 |
| --- | --- | --- |
| --all | 13s | 17s |
| --all --no-cache | 60s | 150s |
| --all --runInBand | 36s | 40s |
| --all --no-cache --runInBand | 60s | 65s |

vitalibozhko commented 5 years ago

I'd love to provide a repro for a faster resolution, but like most people here I can't share private projects. If someone could suggest an open-source project that uses Jest for its tests, we could experiment with it and see if the regression happens there.

mkreiser commented 5 years ago

Here's an open-source project I built that uses Jest for testing. It runs both traditional unit tests and integration tests with live API calls. I definitely noticed significant slowdowns when trying to upgrade to jest@24.

Unit tests and integration tests typically run in ~10s each, so should be easy to quickly do some verification.

https://github.com/mkreiser/ESPN-Fantasy-Football-API

mkreiser commented 5 years ago

To be more useful, I ran some stats. Unfortunately, ESPN's API appears to be down, so I could not run the integration tests (which might have been more interesting).

env (via npx envinfo --preset jest)

System:
    OS: macOS 10.14
    CPU: (8) x64 Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
  Binaries:
    Node: 8.15.0 - ~/.nvm/versions/node/v8.15.0/bin/node
    npm: 6.7.0 - ~/.nvm/versions/node/v8.15.0/bin/npm

Branches

master = jest@23.6, jest-24 = jest@24.1

Unit test performance (451 tests in 16 suites) (npm run test:unit)

jest@23.6:

| Options | Time (all passed) |
| --- | --- |
| (no options) | 3.5s |
| --maxWorkers=2 | 3.4s |
| --no-cache | 8.6s |
| --no-cache --maxWorkers=2 | 6.8s |
| --runInBand | 3.8s |
| --runInBand --no-cache | 8.3s |

jest@24.1:

| Options | Time (all passed) |
| --- | --- |
| (no options) | 4.4s |
| --maxWorkers=2 | 4.7s |
| --no-cache | 13.4s |
| --no-cache --maxWorkers=2 | 17.6s |
| --runInBand | 5.1s |
| --runInBand --no-cache | 9.3s |
vitalibozhko commented 5 years ago

Checked on this repo tests https://github.com/LambdaSchool/React-Testing, results: v23:

 PASS  src/__tests__/App.spec.js
  <App />
    ✓ renders without crashing (13ms)

 PASS  src/__tests__/Panel.spec.js
  <Panel />
    ✓ renders without crashing (13ms)

 PASS  src/__tests__/Display.spec.js
  <Display />
    ✓ renders without crashing (14ms)

 PASS  src/__tests__/Button.spec.js
  <Button />
    ✓ renders without crashing (2ms)

Test Suites: 4 passed, 4 total
Tests:       4 passed, 4 total
Snapshots:   0 total
Time:        4.158s
Ran all test suites.

v24

 PASS  src/__tests__/Display.spec.js
  <Display />
    ✓ renders without crashing (13ms)

 PASS  src/__tests__/Panel.spec.js
  <Panel />
    ✓ renders without crashing (17ms)

 PASS  src/__tests__/App.spec.js
  <App />
    ✓ renders without crashing (23ms)

 PASS  src/__tests__/Button.spec.js
  <Button />
    ✓ renders without crashing (4ms)

Test Suites: 4 passed, 4 total
Tests:       4 passed, 4 total
Snapshots:   0 total
Time:        5.182s
Ran all test suites.

Verbose mode does not produce consistent timings; however, in non-verbose mode v24 is on average 1 second slower than v23 (3.9s vs 4.9s on my machine). Here's an archive with the two setups: https://www.dropbox.com/s/mgbviexw96l2o1z/JestReact-Testing%2023vs24.zip?dl=0

oskarolsson-jimdo commented 5 years ago

@thymikee Can we remove the Windows tag here? This is definitely present on macOS and Ubuntu as well.

tbergquist-godaddy commented 5 years ago

Jest v24:

Test Suites: 96 passed, 96 total
Tests:       276 passed, 276 total
Snapshots:   129 passed, 129 total
Time:        113.429s

Jest v23:

Test Suites: 96 passed, 96 total
Tests:       276 passed, 276 total
Snapshots:   129 passed, 129 total
Time:        17.288s
SimenB commented 5 years ago

Woah, 17s vs 113s, that's huge! Any possibility of sharing that project?


I think we need to start seriously considering adding performance monitoring to Jest. @aaronabramov talked about this in #4471. Node ships essentially the same performance API as the browser (available since Node 8): https://nodejs.org/api/perf_hooks.html, so maybe we can use that to measure. Having some way of emitting how long different things take should probably be a first step (so #4471, but that only talks about what happens inside of tests).

tbergquist-godaddy commented 5 years ago

Sure, it is this repo.

Currently I do not want to upgrade to v24 due to this difference 😞

SimenB commented 5 years ago

Thanks!

On my machine, that takes 35-40 sec on a cold cache and 7-8 sec on a warm cache. With Jest 24, I get 40-45 sec on a cold cache and 17-18 sec on a warm cache. That definitely shows the problem (although not as severely as on your machine). I'll try to find some time to dig into this.

Currently I do not want to upgrade to v24 due to this difference

Understandable!

tbergquist-godaddy commented 5 years ago

Maybe I ran v23 with cache and v24 without 🤔 .

But we also have another repo where the tests got noticeably slower after upgrading. I don't have any official timings though.

vitalibozhko commented 5 years ago

I suspect Babel 7 contributes significantly to the issue. Check out commit 1818d84 and the one before it: I get a 0.5-second increase for 4 simple tests.

MischkowskyM commented 5 years ago

Tested with npx jest; npx jest, taking the latter time (to get a hot cache).

| Version | Tests | Time |
| --- | --- | --- |
| 23.6.0 | 5749 | 59.13s |
| 24.1.0 | 5749 | 144.5s |
| 24.8.0 | 5812 | 125.88s |

The percentage difference between hot and cold cache is also much greater

| Version | Hot | Cold | Difference |
| --- | --- | --- | --- |
| 23.6.0 | 58s | 65s | +12% |
| 24.1.0 | 140s | 190s | +35% |
| 24.8.0 | 126s | 145s | +16% |

Even worse is the difference between jest and jest --no-cache

| Version | jest (hot cache) | jest --no-cache | Difference |
| --- | --- | --- | --- |
| 23.6.0 | 59s | 113s | +92% |
| 24.1.0 | 144s | >400s | +>200% |
| 24.8.0 | 126s | 1318s | +>1000% |

I aborted the --no-cache run on 24.1 after 400 seconds, with 2669/5749 tests and 113/621 suites done, so the total execution time would probably have been in the 800-900 second range.

Edit: Jest 24.8.0 was a big improvement. Slightly faster run time despite more tests.

damianfrizzi commented 5 years ago

Is anyone working on this, given that there is a clear repro? We're also seeing a massive performance regression when updating to Jest 24. Due to https://github.com/facebook/jest/issues/7709 we'd have to upgrade to Jest 24. Right now we're pinning write-file-atomic to 2.4.1 via a resolution, which is surely not ideal.

SimenB commented 5 years ago

There is #8032 which has identified one cause of regression

agrasley commented 5 years ago

We've recently switched from Babel 6/Jest 23 to Babel 7/Jest 24 and have noticed similarly stark speed regressions (from ~150s to ~200s with a clean cache, and from ~100s to ~160s cached). It doesn't appear to be Babel 7 related, since running Babel compilation by itself is actually faster now.

arkanmgerges commented 5 years ago

Hi there, I'd like to share my experience with Jest. I like Jest, but the problem I'm having is that it takes a lot of time to run 6 tests: 13.4 seconds.

 PASS  test/domain/model/Actor.test.ts (12.193s)
  Actor creation
    √ should create an actor (5ms)
    √ should create an actor will publish ActorCreated event (3ms)
  Actor roles
    √ should create a role and to be an instance of it (15ms)
    √ should publish CreatorRoleAssignedToActor event when add a creator role to an actor (2ms)
    √ should publish UserRoleAssignedToActor event when add a user role to an actor (5ms)
    √ should create a user actor and publish UserRoleAssignedToActor domain event (1666ms)

Test Suites: 1 passed, 1 total
Tests:       6 passed, 6 total
Snapshots:   0 total
Time:        13.4s
Ran all test suites.

Watch Usage: Press w to show more.
npx envinfo --preset jest
npx: installed 1 in 2.847s

  System:
    OS: Windows 10
    CPU: (8) x64 Intel(R) Core(TM) i7-4700HQ CPU @ 2.40GHz
  Binaries:
    Node: 11.7.0 - C:\Program Files\nodejs\node.EXE
    npm: 6.5.0 - C:\Program Files\nodejs\npm.CMD

Also in the package.json

"scripts": {
  "test": "jest",
  "test:watch": "jest --watch"
},
"devDependencies": {
   "@types/jest": "^24.0.12",
   "jest": "^24.5.0",
   "jest-coverage-badges": "^1.1.2",
   "jest-junit": "^6.4.0",
   "ts-jest": "^24.0.2",
...etc
}

jest.config.js

module.exports = {
    preset: "ts-jest",
    testEnvironment: "node",
    coverageDirectory: "doc/code-coverage",
    collectCoverageFrom: [
        "src/**/*.{js,ts}",
        "!**/node_modules/**",
        "!**/vendor/**",
        "!src/index.ts"
    ],
    coverageReporters: ["json", "html", "json-summary", "jest-junit", "lcov"]
};

I'm using it with TypeScript; in my test folder, I have a file with the 6 tests shown above.

Then I tried to use mocha

npm i -D chai @types/chai mocha @types/mocha sinon @types/sinon sinon-chai @types/sinon-chai source-map-support nyc

and I put mocha.opts inside 'test' folder

--require ts-node/register
--watch-extensions ts
test/**/*.ts

and in the package.json:

    "scripts": {
        "test:watch": "mocha --opts test/mocha.opts --watch"
    },
...etc

I ran the same tests (I only modified the assertions to use sinon-chai), and it ran amazingly fast (about 1-2 seconds):

Actor creation
    √ should create an actor
    √ should create an actor will publish ActorCreated event

  Actor roles
    √ should create a role and to be an instance of it
    √ should publish CreatorRoleAssignedToActor event when add a creator role to an actor
    √ should publish UserRoleAssignedToActor event when add a user role to an actor
    √ should create a user actor and publish UserRoleAssignedToActor domain event (325ms)

  6 passing (336ms)

Something is not right, and I don't know what the problem is.

martabacc commented 5 years ago

Is anything being done about this? Any advice on how to approach it? It's alarmingly slow, by the way.

rmallols commented 5 years ago

We ended up migrating to Mocha (plus Chai and Sinon), which brought a significant improvement in testing time.

omril1 commented 5 years ago

I'm considering going back to Jest 23 until this is fixed; it's about 3-5 times slower than before for me.

deleteme commented 5 years ago

The fastest jest release has been 22.4.4. All newer versions have been much slower.

martabacc commented 5 years ago

@deleteme how does v23 compare to v22? Is the speed difference significant? I want to propose downgrading to 22, but my colleague says that version is too old (compared to 24).

ELLIOTTCABLE commented 5 years ago

Hi! I don't want to add to the noise of this Issue, so I'll keep it simple and add one more open-source reproduction:

Project with a large code size (in particular, a single 2-megabyte-ish generated lexer), where the single change of upgrading Jest v23.0.x to Jest v24.0.x caused test times to jump from ~7 seconds (~5s per test file) to ~147 seconds (~70s for each individual test file). Yes, a regression of 2,000%. 🤯

Project is ELLIOTTCABLE/excmd.js; commit ELLIOTTCABLE/excmd.js@f0982114 used Jest 23, and consistently reports <10s runtimes on jest --clearCache && time jest; the next commit, ELLIOTTCABLE/excmd.js@d4d3a08d, updates Jest to v24, and shows consistent cache-cleared timing of ≥100s.

Again, so it doesn't slip by: I strongly suspect the performance regression has something to do with filesystem copying or module loading, because the thing that makes my project an outlier is definitely the loading of a single, hairy, two-megabyte .js file. (An unrelated bug.)

I'm also not convinced it's related to Babel; Babel happily deoptimizes the styling of that file and continues on its merry way, showing a wall-clock build time of 4.48s.


(Because I'm sure this issue is a stressful one (lots of complaining users, difficult to reproduce, and performance-related and thus difficult to debug …), I'd like to add: we love Jest, and your hard maintenance work, even the triage where you say “no”, is greatly appreciated. ❤️)

deleteme commented 5 years ago

@ronayumik Ever since Jest 22.4.4, whenever a new version of Jest is released and I'm interested in upgrading, I compare how long it takes to run our tests (412 test suites, 2015 tests).

The following gist includes several benchmarks using Jest 22, 23, and 24.

https://gist.github.com/deleteme/25fcca7cd16aceb30cc5f50e022aa706

So far, every new version has been significantly slower.

thymikee commented 5 years ago

@deleteme have you tried with micromatch pinned to v4?

deleteme commented 5 years ago

@thymikee No. Do you want me to check out the master branch of Jest, edit the micromatch dependency, and run it against our tests?

thymikee commented 5 years ago

Nah, you can just use Yarn resolutions
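For anyone unfamiliar with the feature, a Yarn resolution pins a transitive dependency from your own package.json; something like this (pinning micromatch to v4, per the suggestion above) should be all that's needed:

```json
{
  "resolutions": {
    "micromatch": "^4.0.2"
  }
}
```

After adding the field, re-run yarn install so the lockfile picks up the pinned version.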

deleteme commented 5 years ago

@thymikee I've updated my benchmark gist to include a comparison of jest @ 24.8.0 with and without the micromatch 4.0.2 yarn resolution.

https://gist.github.com/deleteme/25fcca7cd16aceb30cc5f50e022aa706#benchmark-jest-2244-2480-with-micromatch-yarn-resolution-2480-without-micromatch-yarn-resolution

jeysal commented 5 years ago

That's still pretty bad :/ I still feel like something must be wrong with jest-runtime, but I don't know what.

smelukov commented 5 years ago

This is not a Jest issue; it's about Babel 7. We updated Babel 6 to 7 in our project and saw about a 30% bundling performance degradation for the same codebase.

jeysal commented 5 years ago

@smelukov That's an interesting possibility, I hadn't thought about that before. Given that these performance issues rarely have a single cause or solution Babel could indeed be another factor. Have you managed to make this reproducible on a smaller scale and reported an issue with Babel?

agrasley commented 5 years ago

@smelukov Our Babel compile times are faster with 7, but Jest is slower.

Edit: Also, if I'm not mistaken, a Babel slowdown would only affect uncached runs, but we're seeing the regression for cached runs as well.

smelukov commented 5 years ago

@jeysal I'm doing some digging to find the bottleneck; I'll report back when done ;)

@agrasley we are using a cached build

SergiuB commented 5 years ago

We moved off Babel (now fully TS with ts-jest) and are seeing this regression as well from 23 to 24 (+40% time for cached tests, +100% for non-cached tests). Memory usage also increases. Pinning micromatch to v4 had very little benefit, and using the most recent jsdom (15) also did not help.

oshalygin commented 5 years ago

We can't get past our CI build because of heap exhaustion issues on the latest version of Jest (24).

Anyone have any short term solutions? I'm willing to help PR something if someone can guide me on where to start :)

oshalygin commented 5 years ago

Looking at the code behind the --coverage flag, that seems to be the root cause of our heap issues.

A ton of coverage-related code has been pushed over the past few months, so I'm not sure where the source may be. I'll do a bit more digging.

csvan commented 5 years ago

This is not a jest issue. It's about babel 7.

It's absolutely a Jest issue. We used Babel 7 under both Jest 23 and 24 and saw a severe drop on 24 regardless.

omril1 commented 5 years ago

I updated from 24.7.1 to 24.8.0 and it's working fast again, even better than 23.6.0 was. For ~400 tests it used to take 50-90 seconds, now it takes 28-32 seconds. Can anyone else try and confirm?

dmitriivikorchuk commented 5 years ago

I updated from 24.7.1 to 24.8.0 and it's working fast again, even better than 23.6.0 was. For ~400 tests it used to take 50-90 seconds, now it takes 28-32 seconds. Can anyone else try and confirm?

It doesn't help. 1630 tests take 1152.588s in our private project, run with --clearCache and type checking enabled ("isolatedModules": false). The micromatch 4.0.2 yarn resolution didn't help us either.

oshalygin commented 5 years ago

Will try the latest version(24.8.0) shortly and report back.

EDIT: Just as slow on 24.8.0. Does anyone have a workaround? We have ~5500 tests and literally can't run them on the latest version (it dies in CI due to timeouts, or takes an eternity (20+ minutes) vs. the 3 minutes it used to take).

oshalygin commented 5 years ago

I'm going to look at the source code and do a few diffs to see what I can find; I may at least be able to get started with some help.

If anyone has already started, please ping me or reply in this thread. Much love.

vitalibozhko commented 5 years ago

I've hopefully spotted some places that slow Jest 24 down compared to Jest 22. Here's the link if someone is willing to try it out: https://github.com/vitalibozhko/jest/tree/24.8.0-perf-spots (a couple of tweaks and some minor features disabled just to test performance, not a real fix). I get nearly the same time on this branch as on Jest 22 for our private project setup.

To try this out, just follow https://github.com/facebook/jest/blob/master/CONTRIBUTING.md#how-to-try-a-development-build-of-jest-in-another-project. I'd also suggest running Jest either via package.json scripts or via ./node_modules/.bin/jest to make sure the linked version is used (I personally use the latter). One way to verify the link is properly set is to run ./node_modules/.bin/jest --version in a terminal; it should print 24.8.0-dev.

SimenB commented 5 years ago

That's great @vitalibozhko! Looking at the diff, I see 3 main changes

  1. Upgrade to micromatch@4
  2. Move stack collection (maybe even remove? I just skimmed through)
  3. Do not use pretty-format to format thrown non-errors

Does that seem right? If yes, which parts had the most impact? Some comments on the changes:

  1. We'll be upgrading micromatch in jest 25.
  2. I was afraid that eagerly collecting the stacks would have a negative impact on perf: https://github.com/facebook/jest/pull/6965#issuecomment-421612367. Not sure what to do here: stacks are important for understanding what went wrong when tests fail. Maybe we could add an option for it, so people can opt in to better traces, which they'd activate when tests fail to (potentially) get a better trace?
  3. I'm a bit surprised by the pretty-format change - I wouldn't expect that to be heavy as most of the time Errors are thrown and that code path is not activated.
Primajin commented 5 years ago

Opting in sounds like a good idea - that way I can run jest fast on ci and more verbose locally when I really want to debug something.

nol13 commented 5 years ago

@vitalibozhko assuming I'm linking your branch correctly, it seems to be more or less the same as 24.8.0 for me. I feel like the first time I ran it it was faster, so maybe something is off there, but now I'm consistently seeing roughly the same times as 24.8.0. I'm pretty sure it's using the build from your branch, anyway. We just migrated to Jest, so most of our tests still use chai expect and Sinon, if that matters.

260 suites / 1583 tests / mbp quad core / 2nd run times shown

| Version | Time |
| --- | --- |
| 22.4.4 | 47s |
| 23.6.0 | 54s |
| 24.8.0 | 72s |
| 24.8.0-perf-spots | 72s |

Edit: Node version seems to have a large effect on this as well, going to node 10 from node 8 takes 10-15s off of the above times.

(I tried to test off of the latest master commit too but was getting syntaxish errors, and was still getting same errors when i went back to re-link vitalibozhko's branch, and I don't feel like messing with it anymore so..)

vitalibozhko commented 5 years ago

@SimenB

I see 3 main changes

Upgrade to micromatch@4
Move stack collection (maybe even remove? I just skimmed through)
Do not use pretty-format to format thrown non-errors
Does that seem right?

That looks correct

which parts had the most impact?

I'd say the most noticeable impact comes from requiring "heavy" modules like pretty-format, even though they are only needed under certain circumstances (e.g. a spec uses snapshots, a failed spec needs to format an error, etc.). The changes I made were the most obvious places from the profile flame chart. In terms of CPU load there is one obvious thing: getAllCoverageInfo using deepCyclicCopyObject (7% of total time), replaced with JSON.parse(JSON.stringify()), which fits just fine for coverage cloning. In the case of my project it reduces total cloning time from 2.3s to 0.4s.

I was afraid that being eager to collect the stacks would have a negative impact on perf: #6965 (comment). Not sure what to do here - stacks are important for understanding what went wrong if tests fail. Maybe we could add an option for it, so people can opt-in to better traces? Which they'd activate when tests fail to (potentially) get a better trace

Putting the stack collection back adds 1-2s (15.5s > 16.5-17.x s) to the total time with a hot cache and in-band execution. Test Suites: 66 passed, 66 total. Tests: 16 skipped, 441 passed, 457 total. Snapshots: 9 passed, 9 total. Time: 17.398s

It might be worth measuring the retained memory without stack evaluation in place; my guess is that it would only become an issue in watch mode.

I'm a bit surprised by the pretty-format change - I wouldn't expect that to be heavy as most of the time Errors are thrown and that code path is not activated.

As mentioned above, it's not the execution time that makes the difference, but the time to require the module.

vitalibozhko commented 5 years ago

By the way, I'm checking performance on Node v11.10.0.

mucsi96 commented 5 years ago

Let me share my profiling here.

Steps:

  1. npx create-react-app my-app --typescript
  2. change App.test.tsx
  3. run npm test

CPU Profile: CPU-20190801T141211.zip (attached)

Is it expected that 15 seconds are spent just requiring modules for a single trivial React component and test?

Can someone with more experience on CPU profiling take a look?

csvan commented 5 years ago

I ran a test of 24.9 vs 24.8 on one of our suites. No improvement, unfortunately.

24.8, cold cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests:       5 skipped, 494 passed, 499 total
Snapshots:   5 passed, 5 total
Time:        29.323s
Ran all test suites.
✨  Done in 30.65s.

24.8, warm cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests:       5 skipped, 494 passed, 499 total
Snapshots:   5 passed, 5 total
Time:        19.109s, estimated 26s
Ran all test suites.
✨  Done in 19.91s.

24.9, cold cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests:       5 skipped, 494 passed, 499 total
Snapshots:   5 passed, 5 total
Time:        29.684s
Ran all test suites.
✨  Done in 30.57s.

24.9, warm cache:

Test Suites: 3 skipped, 112 passed, 112 of 115 total
Tests:       5 skipped, 494 passed, 499 total
Snapshots:   5 passed, 5 total
Time:        20.909s, estimated 27s
Ran all test suites.
✨  Done in 21.65s.