lukejacksonn / oceanwind

Compiles tailwind shorthand into css at runtime. Succeeded by Twind.
https://twind.dev

Can you explain the benefits better? #31

Closed: frederikhors closed this issue 3 years ago

frederikhors commented 3 years ago

It's not clear to me what the advantage is of using this package instead of "classic" Tailwind with a final bundle, perhaps even with PurgeCSS.

Can you explain the benefits better?

Thanks, really.

lukejacksonn commented 3 years ago

No worries and thanks for taking the time to file this issue.

So the main benefit here is that you get production-ready, optimized styles without any build (purge) step! Instead of starting with all possible styles and removing what you don't need through purging, oceanwind starts with nothing and only generates the styles you actually use. It does this at runtime (or at serve time if you are using SSR) with no configuration.
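To make that concrete, here is a minimal sketch of the idea, not oceanwind's actual implementation; the rule table and function names below are hypothetical:

```javascript
// Toy illustration of runtime shorthand-to-CSS translation:
// nothing is generated up front; a rule is produced only the
// first time a shorthand is actually used, then cached.

const rules = {
  'mt-4': 'margin-top: 1rem;',
  'text-center': 'text-align: center;',
  'bg-purple-500': 'background-color: #9f7aea;',
};

const cache = new Map();

function translate(shorthand) {
  // Generate (and cache) a rule only on first use
  if (!cache.has(shorthand)) {
    const body = rules[shorthand];
    if (body === undefined) throw new Error(`Unknown shorthand: ${shorthand}`);
    cache.set(shorthand, `.${shorthand} { ${body} }`);
  }
  return cache.get(shorthand);
}

// Only the rules for the shorthands we use ever get generated,
// so there is nothing left over to purge.
console.log(translate('mt-4'));        // .mt-4 { margin-top: 1rem; }
console.log(translate('text-center')); // .text-center { text-align: center; }
```

In the real library the generated rules are injected into a stylesheet rather than returned as strings, but the purge-free property comes from this same on-demand shape.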

So that is oceanwind attempting to reach feature parity with Tailwind. From a developer's perspective there should be almost no difference in API. But because we are operating in JS land we can take this a few steps further and start exploring features that would be inherently impossible with "classic" Tailwind.

Things like:

You get all of this for less than 10kb of dependency, that can be added to your project with a single line of code:

import ow from 'https://unpkg.com/oceanwind'

That's about all I've got 😅 Does that answer your question?

frederikhors commented 3 years ago

Ok. Thanks a lot.

Then I understood correctly.

Great idea. What concerns me now is performance, especially on slow devices.

Ideally there would be a comparison between classic Tailwind CSS and the oceanwind variant, all measured in the browser in a rigorous way.

Unfortunately, many of the projects I work on run on very slow devices, so where possible I prefer to let the browser do its job and manage the CSS natively, especially to keep the main thread free.

lukejacksonn commented 3 years ago

No worries, and yes, I can see that perf might be a concern. We did benchmark the translate function itself once and found that it definitely wasn't "slow" by any means. That said, a few things have changed since then, so I'm not going to quote numbers here; I'll have to find another way of testing it.
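For anyone who wants to gauge this themselves, a rough micro-benchmark has roughly this shape; the translate function below is a trivial stand-in, not oceanwind's:

```javascript
// Micro-benchmark sketch: time many calls to a translate-like function.
// In the browser, prefer performance.now() for higher-resolution timing.

function translate(shorthand) {
  // Stand-in: split a shorthand like "mt-4" into its parts
  return shorthand.split('-');
}

const classes = ['mt-4', 'text-center', 'bg-purple-500'];
const iterations = 100000;

const start = Date.now();
for (let i = 0; i < iterations; i++) {
  translate(classes[i % classes.length]);
}
const elapsed = Date.now() - start;

console.log(`${iterations} calls in ${elapsed}ms`);
```

Running something like this against the real translate function on the target (slow) device would answer the question more honestly than numbers measured on a development machine.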

If you ever end up trying it out then I'd like to hear your experiences!