lukejacksonn / oceanwind

Compiles tailwind shorthand into css at runtime. Succeeded by Twind.
https://twind.dev

Adds caching for rule sets and rule translations #36

Open lukejacksonn opened 3 years ago

lukejacksonn commented 3 years ago

Addresses #4 by adding a cache for both rule sets and individual rules applied under a given theme. I'm not 100% sure whether this is necessary or even beneficial with regard to perf in real-life scenarios.

It will increase the memory footprint somewhat but will make repeat lookups for rules and rule sets O(1). Probably the most advantageous aspect is skipping calls to merge (which is deep) and the creation of variants, which also involves object creation/manipulation that might be quite costly.
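For illustration, here is a minimal sketch of the caching idea; the `expensiveTranslate` helper and the `themeId` key are hypothetical stand-ins, not oceanwind's actual internals:

```js
// Hypothetical stand-in for the real translation work (the deep merge of
// theme values plus variant object creation); for illustration only.
const expensiveTranslate = (rule, theme) => ({ [rule]: theme });

// One cache for individual rule translations, one for whole rule sets.
const ruleCache = new Map();
const ruleSetCache = new Map();

const translateRule = (rule, theme, themeId) => {
  const key = `${themeId}:${rule}`;
  if (ruleCache.has(key)) return ruleCache.get(key); // O(1) repeat lookup
  const translated = expensiveTranslate(rule, theme);
  ruleCache.set(key, translated);
  return translated;
};

const translateRuleSet = (rules, theme, themeId) => {
  const key = `${themeId}:${rules}`;
  if (ruleSetCache.has(key)) return ruleSetCache.get(key);
  const result = rules.split(' ').map((rule) => translateRule(rule, theme, themeId));
  ruleSetCache.set(key, result);
  return result;
};
```

The trade-off is exactly the memory footprint mentioned above: every distinct rule and rule set seen under a theme stays in the maps.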

If anyone has any insight into whether this is a good or bad thing to do, or if there is a better approach we could take here, then I am all ears!

bebraw commented 3 years ago

You can measure the impact with https://www.npmjs.com/package/tachometer. I am not sure of the exact setup in this case, but I've found the tool somewhat useful for figuring out whether a change is beneficial or not.

lukejacksonn commented 3 years ago

Thanks, yeah I tried that but the tests I was running seemed inconclusive. This setup seems a bit more accurate: https://github.com/kenoxa/beamwind/tree/main/benchmarks. I would also like to see how this PR affects those results.
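As a rough sketch of how such a head-to-head could be wired up with benchmark.js (the exact setup in the beamwind benchmarks repo may differ, and the library import names here are assumptions that may need adjusting):

```js
import Benchmark from 'benchmark';
// Import names/paths are assumptions; adjust to the libraries' actual entry points.
import ow from 'oceanwind';
import { bw } from 'beamwind';

const input = 'bg-blue-500 text-white hover:underline';

new Benchmark.Suite('Strings')
  .add('oceanwind', () => ow(input))
  .add('beamwind', () => bw(input))
  // Each cycle prints e.g. "oceanwind x 13,186 ops/sec ±9.06% (78 runs sampled)"
  .on('cycle', (event) => console.log(String(event.target)))
  .run();
```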

bebraw commented 3 years ago

Ah, yeah. That's an interesting benchmark. Do you understand what's making the difference so big?

lukejacksonn commented 3 years ago

After speaking with @sastan about it, we think it might just be this caching feature; that is why I am curious to see how this PR compares in the test.

However, beamwind takes a totally different approach to translation lookup and variant application from oceanwind. It is really quite interesting to see. We plan on having a meeting next week to discuss how we can combine efforts generally.

Although we have both taken quite different approaches to the problem, I think we have the same goals and there is a nice middle ground to be had here somewhere.

If you would like to be involved in the conversation then I'd appreciate your input!

bebraw commented 3 years ago

> If you would like to be involved in the conversation then I'd appreciate your input!

Awesome news. Ping me then. 👍

sastan commented 3 years ago

I have re-run the benchmark with this PR and I think it looks a little bit better, but not by as much as we had expected:

# Setup and first insert
  oceanwind x 15,868 ops/sec ±2.55% (79 runs sampled)
  beamwind x 48,841 ops/sec ±19.29% (77 runs sampled)

# Strings
  oceanwind x 13,186 ops/sec ±9.06% (78 runs sampled)
  beamwind x 80,186 ops/sec ±11.62% (69 runs sampled)

# Arrays
  oceanwind x 96,961 ops/sec ±8.58% (72 runs sampled)
  beamwind x 828,937 ops/sec ±4.95% (79 runs sampled)

# Objects
  oceanwind x 80,148 ops/sec ±5.16% (80 runs sampled)
  beamwind x 591,048 ops/sec ±4.06% (83 runs sampled)

# Grouping
  oceanwind x 14,730 ops/sec ±20.15% (47 runs sampled)
  beamwind x 175,654 ops/sec ±4.66% (84 runs sampled)

Maybe something in the benchmark is not right, or it favors beamwind.

I have pushed the update to the benchmarks folder: https://github.com/kenoxa/beamwind/tree/main/benchmarks

lukejacksonn commented 3 years ago

Thanks @sastan, I also ran the same tests and found similar results. I'm going to start stubbing parts of the code out to try to find out where the bottleneck is. I have considered using the https://github.com/lukeed/clsx function for the argument normalizing, as it essentially does everything we need.
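(For context, a small sketch of what clsx gives us; the way it would be wired into oceanwind's translator is hypothetical.)

```js
import clsx from 'clsx';

// clsx flattens strings, arrays and objects (keeping keys with truthy values)
// into a single space-separated string, which is the normalization step needed
// before translating each rule.
const normalized = clsx('bg-blue-500', ['text-white', { 'hover:underline': true, hidden: false }]);
// => 'bg-blue-500 text-white hover:underline'

// Hypothetical next step: split and hand each rule to the translator.
const rules = normalized.split(' ');
```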

sastan commented 3 years ago

clsx is a very good choice for this.