
Dev Bench

Designed to benchmark performance of PCs/Laptops/WSL/etc when working on NodeJS-based Front-End projects.

Getting Started

  1. Clone this repo
  2. Install Python3 (sudo apt install python-is-python3 on WSL)
  3. Install pip (sudo apt install python3-pip on WSL)
  4. Install nodeenv: pip install nodeenv
  5. Run npm ci to install deps
  6. Copy config.example.ts to config.ts
  7. Modify config.ts to your liking (add projects, commands, optionally patches, etc.); see Configuration and the example sketch after this list
  8. Run npm start (or npm start -- --run-indefinitely)
  9. See results in the CLI (mean ± standard deviation):

    Benchmarking "build"...
    Average: 10s ±132ms
    Benchmarking "unit test"...
    Average: 45s ±12s

    and more details in the results.json file
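
For illustration only, a config.ts might end up looking something like the sketch below; the field names shown here (projects, gitUrl, nodeVersion, commands) are assumptions, so follow config.example.ts for the actual schema.

    // config.ts - purely hypothetical sketch; config.example.ts defines the real shape
    const config = {
      projects: [
        {
          name: "my-frontend-app",                              // hypothetical project
          gitUrl: "https://github.com/me/my-frontend-app.git",  // cloned before benchmarking
          nodeVersion: "18.17.0",                               // installed via nodeenv
          commands: [
            { name: "build", command: "npm run build" },
            { name: "unit test", command: "npm test" },
          ],
        },
      ],
    };

    export default config;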

CLI Options

--run-indefinitely - when set, re-runs benchmarks for all projects until you stop the process manually (Ctrl+C). Useful when you can leave the device running for a long time and want more precise benchmark results. Note: afterAll() hooks won't run in this case, which might affect reporters

Configuration

Project configuration

Available options:

Commands

Commands are what gets benchmarked; common examples include npm ci, npm test, npm run build, etc.

Every command needs a name.

Types of commands (only one per command):
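
The specific command types aren't enumerated in this extract; purely to illustrate the "name plus exactly one type" rule, a hedged sketch (the command field name is an assumption) could look like:

    // Hypothetical sketch: each command has a name and exactly one type of action
    const commands = [
      { name: "install", command: "npm ci" },       // shell-command type (assumed field name)
      { name: "build", command: "npm run build" },  // another shell command
      // A single command must not combine two types of action at once
    ];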

Patching

Patching can be useful to disable certain tests, change scripts, engines, etc. Patches are applied right after cloning, before nodeenv and npm modules are installed.

Available patching options:

Note: all patching options are mutually exclusive
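
The available patching options aren't listed in this extract, so the following is only a hypothetical sketch assuming a patch could be expressed as a function that edits files in the freshly cloned project directory; the patch field name and its signature are assumptions.

    // Hypothetical patch sketch - the field name and signature are assumptions
    import { readFileSync, writeFileSync } from "fs";
    import { join } from "path";

    const project = {
      name: "my-frontend-app",
      // Applied right after cloning, before nodeenv and npm modules are installed
      patch: (projectDir: string) => {
        const pkgPath = join(projectDir, "package.json");
        const pkg = JSON.parse(readFileSync(pkgPath, "utf8"));
        pkg.scripts.test = "jest --ci"; // e.g. swap a test script
        delete pkg.engines;             // e.g. drop an engines constraint
        writeFileSync(pkgPath, JSON.stringify(pkg, null, 2));
      },
    };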

Extensibility

The system supports multiple reporters that extend the Reporter class.

Available reporters:

All reporters must implement the collectResult() method, which is called after each command is benchmarked.

Some reporters may also implement the afterAll() method, which is called after all benchmarks have finished for all projects.
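
As a minimal sketch, assuming Reporter is exported from the project sources and that collectResult() receives the command name and timing statistics (the import path and parameter shape here are assumptions), a custom reporter could look like:

    // Hypothetical custom reporter - import path and result shape are assumptions
    import { Reporter } from "./src/reporter";

    class ConsoleSummaryReporter extends Reporter {
      private results: { command: string; meanMs: number }[] = [];

      // Called after each command is benchmarked
      collectResult(result: { command: string; meanMs: number }): void {
        this.results.push(result);
      }

      // Optional: called after all benchmarks are done for all projects
      afterAll(): void {
        for (const r of this.results) {
          console.log(`${r.command}: ${(r.meanMs / 1000).toFixed(1)}s`);
        }
      }
    }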

Troubleshooting

TODO