nextui-org / tailwind-variants

🦄 Tailwindcss first-class variant API
https://tailwind-variants.org
MIT License

feat: improving performance #68

Closed TIMMLOPK closed 1 year ago

TIMMLOPK commented 1 year ago

Description


  1. Use object spread instead of Object.assign()
  2. Use for...in instead of forEach
  3. Use a stack instead of .flat()

Although this PR doesn't make tv faster than cva, it is a clear improvement (the changes are sketched below).
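
For illustration, here is a minimal sketch of the three techniques listed above, in isolation. The helper names are hypothetical and this is not the actual tailwind-variants internals:

```ts
// Hypothetical helpers sketching the three micro-optimizations above.
// Not the actual tailwind-variants internals.

// 1. Object spread instead of Object.assign()
const mergeProps = (
  defaults: Record<string, unknown>,
  props: Record<string, unknown>,
) => ({ ...defaults, ...props });

// 2. for...in instead of Object.keys(...).forEach(...)
const collectVariantClasses = (
  variants: Record<string, string>,
  out: string[],
) => {
  for (const key in variants) {
    out.push(variants[key]);
  }
};

// 3. Iterative, stack-based flatten instead of Array.prototype.flat()
type ClassValue = string | null | undefined | ClassValue[];

const flattenClasses = (input: ClassValue[]): string[] => {
  const stack: ClassValue[] = [...input];
  const result: string[] = [];
  while (stack.length) {
    const item = stack.pop();
    if (Array.isArray(item)) {
      stack.push(...item); // defer nested arrays instead of recursing
    } else if (item) {
      result.push(item);
    }
  }
  return result.reverse(); // pop() walks back-to-front, so restore input order
};
```

The first two mainly shave off per-call overhead (the Object.assign call and the forEach callback invocations); the explicit stack avoids delegating to .flat() inside the hot path.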

Additional context

Based on the #65 benchmark tests

Before

CVA x 854,053 ops/sec ±3.37% (85 runs sampled)
CVA with tailwind-merge x 459,752 ops/sec ±2.91% (87 runs sampled)
TV with slots x 80,733 ops/sec ±5.49% (93 runs sampled)
TV without slots x 95,664 ops/sec ±0.72% (91 runs sampled)
Fastest is CVA

Now

CVA x 897,001 ops/sec ±1.46% (90 runs sampled)
CVA with tailwind-merge x 471,939 ops/sec ±1.20% (93 runs sampled)
TV with slots x 151,139 ops/sec ±1.67% (93 runs sampled)
TV without slots x 172,622 ops/sec ±0.72% (92 runs sampled)
Fastest is CVA
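
For context, the "x N ops/sec ±…% (N runs sampled)" lines above match the default output of Benchmark.js. A minimal sketch of how such a comparison suite can be wired up, covering only two of the four cases; the variant config below is illustrative, not the actual fixtures from #65:

```ts
import Benchmark from "benchmark";
import { tv } from "tailwind-variants";
import { cva } from "class-variance-authority";

// Illustrative variant definitions, not the actual #65 fixtures.
const tvButton = tv({
  base: "font-medium rounded-full",
  variants: {
    color: { primary: "bg-blue-500 text-white", secondary: "bg-purple-500 text-white" },
    size: { sm: "text-sm px-2", md: "text-base px-4" },
  },
});

const cvaButton = cva("font-medium rounded-full", {
  variants: {
    color: { primary: "bg-blue-500 text-white", secondary: "bg-purple-500 text-white" },
    size: { sm: "text-sm px-2", md: "text-base px-4" },
  },
});

const suite = new Benchmark.Suite();

suite
  .add("CVA", () => cvaButton({ color: "primary", size: "sm" }))
  .add("TV without slots", () => tvButton({ color: "primary", size: "sm" }))
  // Each cycle prints "name x N ops/sec ±x% (N runs sampled)"
  .on("cycle", (event: Benchmark.Event) => console.log(String(event.target)))
  .on("complete", () => {
    console.log("Fastest is " + suite.filter("fastest").map("name"));
  })
  .run();
```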

jrgarciadev commented 1 year ago

Hey @TIMMLOPK, thank you! Please check the tests, almost all of them are failing.

TIMMLOPK commented 1 year ago

Fixed. I'll update the benchmark later.

TIMMLOPK commented 1 year ago

New benchmark, run on my PC (Windows 11, 4 cores, Node v18.15.0):

Before

CVA x 854,053 ops/sec ±3.37% (85 runs sampled)
CVA with tailwind-merge x 459,752 ops/sec ±2.91% (87 runs sampled)
TV with slots x 80,733 ops/sec ±5.49% (93 runs sampled)
TV without slots x 95,664 ops/sec ±0.72% (91 runs sampled)
Fastest is CVA

Now

CVA x 897,001 ops/sec ±1.46% (90 runs sampled)
CVA with tailwind-merge x 471,939 ops/sec ±1.20% (93 runs sampled)
TV with slots x 151,139 ops/sec ±1.67% (93 runs sampled)
TV without slots x 172,622 ops/sec ±0.72% (92 runs sampled)
Fastest is CVA

Handfish commented 1 year ago

@TIMMLOPK @jrgarciadev I made a pull request to @TIMMLOPK's branch, building on his progress to code-golf this forward a bit. I found that flattening the arrays with recursion squeezed out some more ops/sec.
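
For reference, a sketch of what a recursive flatten along those lines might look like; the names are hypothetical and the exact implementation lives in Handfish's PR. Compared with an explicit stack, recursing into nested arrays while appending to a single shared result array skips the pop/push bookkeeping and the final reverse():

```ts
type ClassValue = string | null | undefined | ClassValue[];

// Hypothetical recursive flatten, reusing one result array across calls.
const flattenClasses = (input: ClassValue[], result: string[] = []): string[] => {
  for (const item of input) {
    if (Array.isArray(item)) {
      flattenClasses(item, result); // recurse into nested arrays
    } else if (item) {
      result.push(item);
    }
  }
  return result;
};

// flattenClasses(["px-2", ["font-medium", ["rounded-full"]]])
//   -> ["px-2", "font-medium", "rounded-full"]
```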