vitest-dev / vitest

Next generation testing framework powered by Vite.
https://vitest.dev
MIT License

Vitest run times very slow on an AWS T3.xlarge instance vs Jest with the same configuration #5668

Open alireza-salemi opened 4 months ago

alireza-salemi commented 4 months ago

Describe the bug

Hello,

I am writing because I see high collection times when I run our 1100 React tests with Vitest in Docker on an AWS T3.xlarge instance.

On my local environment, a MacBook Pro M1 (8 CPUs), the test run takes only 130 seconds.

In Docker on an AWS T3.xlarge instance (4 CPUs) the test run takes 680 seconds. Why is the collect number so high?

In Docker on the same AWS T3.xlarge instance (4 CPUs), Jest takes 495 seconds.

Any idea why this would happen?

PS:

I am using the following versions:

    "vitest": "^1.5.2",
    "vitest-canvas-mock": "^0.3.3",
    "vitest-fetch-mock": "^0.2.2"

I use the following command from package.json to run the tests on the AWS machine (I have also tried --pool=threads, but it gets slower):

CI=true vitest run --coverage.enabled=true --pool=forks --minWorkers=8 --maxWorkers=8
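
For reference, the same pool and worker settings could also be expressed directly in the Vitest config instead of on the CLI. This is only a sketch; my actual config below passes them as flags:

import {defineConfig} from 'vitest/config';

export default defineConfig({
  test: {
    pool: 'forks',   // equivalent of --pool=forks
    minWorkers: 8,   // equivalent of --minWorkers=8
    maxWorkers: 8    // equivalent of --maxWorkers=8
  }
});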

Below is my vitest.config.ts file:

import react from '@vitejs/plugin-react';
import {defineConfig} from 'vitest/config';

export default defineConfig({
  plugins: [react()],
  test: {
    css: true,
    environment: 'jsdom',
    globals: true,
    include: ['**/*.{test,spec}.[t]s?(x)'],
    reporters: process.env.CI ? ['default', 'junit'] : ['default'],
    root: './src',
    outputFile: {
      junit: process.env.CI ? `${__dirname}/reports/junit.xml` : `${__dirname}/coverage/junit.xml`
    },
    setupFiles: 'setupTests.js',
    snapshotSerializers: ['enzyme-to-json/serializer'],
    coverage: {
      provider: 'v8',
      processingConcurrency: 8,
      clean: false,
      reportsDirectory: process.env.CI ? `${__dirname}/reports` : `${__dirname}/coverage`,
      reporter: ['text', 'cobertura'],
      extension: ['.ts', '.tsx'],
      exclude: [
        '**/*.d.ts',
        '**/node_modules/**',
        '**/vendor/**',
        'src/store/lookerQueries/queries/**.ts',
        'src/components/chart/**'
      ]
    }
  }
});

setupTests.js looks like the following:

import '@testing-library/jest-dom';
import {configure} from 'enzyme';
import Adapter from '@cfaester/enzyme-adapter-react-18';
import 'jest-styled-components';
import createFetchMock from 'vitest-fetch-mock';
import IntlPolyfill from 'intl';
import 'intl/locale-data/jsonp/pt';
import 'vitest-canvas-mock';
import {vi} from 'vitest';
import {cleanup} from '@testing-library/react';

afterEach(() => {
  vi.clearAllMocks();
  cleanup();
});

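// Expose Vitest's vi under the global jest name so Jest-oriented helpers imported above (e.g. jest-styled-components) can find it.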
global.jest = vi;

configure({adapter: new Adapter()});

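// Minimal localStorage/sessionStorage stubs (getItem/setItem/clear only).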
const localStorageMock = {
  getItem: vi.fn(),
  setItem: vi.fn(),
  clear: vi.fn()
};
const sessionStorageMock = {
  getItem: vi.fn(),
  setItem: vi.fn(),
  clear: vi.fn()
};

global.localStorage = localStorageMock;
global.sessionStorage = sessionStorageMock;
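// Replace window.location so that reload() is a spy tests can assert on.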
Object.defineProperty(window, 'location', {
  value: {...window.location, reload: vi.fn()}
});

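// Override document.createRange with a minimal stub anchored on the document body.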
if (window.document) {
  window.document.createRange = () => ({
    setStart: () => {},
    setEnd: () => {},
    commonAncestorContainer: {
      nodeName: 'BODY',
      ownerDocument: document
    }
  });
}

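// Replace Blob with a plain object that only records its content and options.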
global.Blob = function (content, options) {
  return {content, options};
};

global.Intl = IntlPolyfill;
if (global.Intl.__disableRegExpRestore) {
  global.Intl.__disableRegExpRestore();
}

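// Lightweight stubs for the parts of the google.maps API touched during tests.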
global.google = {
  maps: {
    Size: class {
      constructor(width, height) {}
    },
    ImageMapType: class {
      constructor(options) {}
    },
    LatLng: class {
      constructor(lat, lng) {
        this.lat = lat;
        this.lng = lng;
      }
    },
    Point: class {
      constructor(x, y) {
        this.x = x;
        this.y = y;
      }
    },
    OverlayView: class {
      constructor() {}
      getPanes() {}
      getProjection = () => ({
        // Note: This simple (and incorrect) mapping is only for testing purposes
        fromLatLngToDivPixel: ({lat, lng}) => ({x: lng, y: lat}),
        fromContainerPixelToLatLng: ({x, y}) => ({lng: x, lat: y})
      });
      setMap(map) {}
      static preventMapHitsAndGesturesFrom(ref) {}
    }
  }
};

const fetchMocker = createFetchMock(vi);
fetchMocker.enableMocks();
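
With enableMocks() in place, tests can stub fetch through the fetchMock global (assuming the jest-fetch-mock-style API that vitest-fetch-mock mirrors). A hypothetical example:

import {describe, expect, it} from 'vitest';

describe('fetch mocking (illustrative)', () => {
  it('resolves with the mocked payload', async () => {
    // fetchMock is installed globally by enableMocks() in the setup file
    fetchMock.mockResponseOnce(JSON.stringify({status: 'ok'}));

    const res = await fetch('/api/health');
    expect(await res.json()).toEqual({status: 'ok'});
  });
});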

Reproduction

I have 1111 tests and I am not able to reproduce this at a smaller scale.

System Info

System:
    OS: macOS 12.5
    CPU: (10) arm64 Apple M1 Max
    Memory: 5.93 GB / 32.00 GB
    Shell: 5.8.1 - /bin/zsh
  Binaries:
    Node: 18.18.0 - ~/.nvm/versions/node/v18.18.0/bin/node
    npm: 9.8.1 - ~/.nvm/versions/node/v18.18.0/bin/npm
  Browsers:
    Chrome: 124.0.6367.119
    Safari: 15.6

Used Package Manager

npm


alireza-salemi commented 2 months ago

The latest Vitest version, 1.6.0, made the performance on AWS even slower. Jest takes only about 500 seconds, while the best Vitest time below is 818.183 seconds, roughly 300 seconds slower. The numbers below come from a hyperfine parameter scan over --minWorkers/--maxWorkers (3 runs per setting).

[2024-06-11T23:31:20.136Z] Benchmark 1: CI=true node --no-compilation-cache --max-old-space-size=1024 node_modules/vitest/dist/cli.js --coverage.enabled=true --minWorkers=1 --maxWorkers=1

[2024-06-12T01:42:57.414Z] Time (mean ± σ): 1944.480 s ± 63.104 s [User: 1802.588 s, System: 232.582 s]

[2024-06-12T01:42:57.414Z] Range (min … max): 1901.200 s … 2016.887 s 3 runs

[2024-06-12T01:42:57.414Z] Benchmark 2: CI=true node --no-compilation-cache --max-old-space-size=1024 node_modules/vitest/dist/cli.js --coverage.enabled=true --minWorkers=2 --maxWorkers=2

[2024-06-12T02:58:59.407Z] Time (mean ± σ): 1130.736 s ± 33.154 s [User: 1986.188 s, System: 270.670 s]

[2024-06-12T02:58:59.407Z] Range (min … max): 1094.696 s … 1159.938 s 3 runs

[2024-06-12T02:58:59.407Z] Benchmark 3: CI=true node --no-compilation-cache --max-old-space-size=1024 node_modules/vitest/dist/cli.js --coverage.enabled=true --minWorkers=3 --maxWorkers=3

[2024-06-12T04:01:51.877Z] Time (mean ± σ): 944.267 s ± 14.383 s [User: 2240.765 s, System: 318.537 s]

[2024-06-12T04:01:51.877Z] Range (min … max): 928.660 s … 956.989 s 3 runs

[2024-06-12T04:01:51.877Z] Benchmark 4: CI=true node --no-compilation-cache --max-old-space-size=1024 node_modules/vitest/dist/cli.js --coverage.enabled=true --minWorkers=4 --maxWorkers=4

[2024-06-12T04:56:39.297Z] Time (mean ± σ): 822.763 s ± 6.932 s [User: 2346.223 s, System: 347.783 s]

[2024-06-12T04:56:39.297Z] Range (min … max): 818.183 s … 830.738 s 3 runs

[2024-06-12T04:56:39.297Z] Benchmark 5: CI=true node --no-compilation-cache --max-old-space-size=1024 node_modules/vitest/dist/cli.js --coverage.enabled=true --minWorkers=5 --maxWorkers=5

[2024-06-12T05:58:32.352Z] Time (mean ± σ): 924.836 s ± 21.396 s [User: 2408.339 s, System: 384.649 s]

[2024-06-12T05:58:32.352Z] Range (min … max): 904.333 s … 947.025 s 3 runs

sheremet-va commented 2 months ago

Try using --pool=vmThreads
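
Applied to the command above that would be, for example:

CI=true vitest run --coverage.enabled=true --pool=vmThreads --minWorkers=4 --maxWorkers=4

The same pool can also be set in vitest.config.ts (sketch):

import {defineConfig} from 'vitest/config';

export default defineConfig({
  test: {
    pool: 'vmThreads'
  }
});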

alireza-salemi commented 2 months ago

@sheremet-va I tried vmThreads and I can't get anywhere close to the 500 seconds that Jest offers. Jest is set to use only 25% of the CPUs.

[2024-06-12T21:29:17.058Z] > insights@0.0.0-build.2215 test:CI

[2024-06-12T21:29:17.058Z] > hyperfine --parameter-scan num_threads 1 15 -m 3 -w 1 'CI=true node --no-compilation-cache --max-old-space-size=1024 node_modules/vitest/dist/cli.js --coverage.enabled=true --minWorkers={num_threads} --maxWorkers={num_threads}'

[2024-06-12T21:29:17.058Z]

[2024-06-12T21:29:17.479Z] Benchmark 1: CI=true node --no-compilation-cache --max-old-space-size=1024 node_modules/vitest/dist/cli.js --coverage.enabled=true --minWorkers=1 --maxWorkers=1

[2024-06-12T23:20:46.224Z] Time (mean ± σ): 1670.071 s ± 31.991 s [User: 1676.073 s, System: 211.337 s]

[2024-06-12T23:20:46.224Z] Range (min … max): 1635.569 s … 1698.753 s 3 runs

[2024-06-12T23:20:46.224Z]

[2024-06-12T23:20:46.224Z] Benchmark 2: CI=true node --no-compilation-cache --max-old-space-size=1024 node_modules/vitest/dist/cli.js --coverage.enabled=true --minWorkers=2 --maxWorkers=2

[2024-06-13T00:26:24.496Z] Time (mean ± σ): 983.366 s ± 1.193 s [User: 1886.827 s, System: 246.599 s]

[2024-06-13T00:26:24.496Z] Range (min … max): 981.991 s … 984.117 s 3 runs

[2024-06-13T00:26:24.496Z]

[2024-06-13T00:26:24.496Z]

[2024-06-13T00:26:24.496Z] Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

[2024-06-13T00:26:24.496Z] Benchmark 3: CI=true node --no-compilation-cache --max-old-space-size=1024 node_modules/vitest/dist/cli.js --coverage.enabled=true --minWorkers=3 --maxWorkers=3

[2024-06-13T01:22:56.433Z]

[2024-06-13T01:22:56.434Z] Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

[2024-06-13T01:22:56.434Z] Time (mean ± σ): 816.370 s ± 7.728 s [User: 2123.947 s, System: 300.907 s]

[2024-06-13T01:22:56.434Z] Range (min … max): 807.455 s … 821.154 s 3 runs

[2024-06-13T01:22:56.434Z]

[2024-06-13T01:22:56.434Z] Benchmark 4: CI=true node --no-compilation-cache --max-old-space-size=1024 node_modules/vitest/dist/cli.js --coverage.enabled=true --minWorkers=4 --maxWorkers=4

[2024-06-13T02:13:26.833Z] Time (mean ± σ): 755.576 s ± 3.423 s [User: 2278.970 s, System: 334.686 s]

[2024-06-13T02:13:26.833Z] Range (min … max): 751.833 s … 758.549 s 3 runs

[2024-06-13T02:13:26.833Z]

[2024-06-13T02:13:26.833Z] Benchmark 5: CI=true node --no-compilation-cache --max-old-space-size=1024 node_modules/vitest/dist/cli.js --coverage.enabled=true --minWorkers=5 --maxWorkers=5

[2024-06-13T03:08:30.456Z] Time (mean ± σ): 826.621 s ± 2.273 s [User: 2315.224 s, System: 369.573 s]

[2024-06-13T03:08:30.456Z] Range (min … max): 824.226 s … 828.748 s 3 runs