armanbilge / calico

Pure, reactive UI library for Scala.js
https://armanbilge.github.io/calico
Apache License 2.0

Make `Html` typeclass go away #182

Open armanbilge opened 1 year ago

armanbilge commented 1 year ago

Building on the ideas in https://github.com/armanbilge/calico/issues/180, I wonder if we can make the Html typeclass go away, so that it is possible to build applications using only fs2.dom.Dom with Concurrent (this will require some performance hacks 😁).

The main ergonomic win will be that this sort of import should no longer be necessary. https://github.com/armanbilge/calico/blob/bec9b18aea64d3abb75a1798bb6a7b1e05c29ceb/todo-mvc/src/main/scala/todomvc/TodoMvc.scala#L65-L66

This will also be good for SVG and web components. Basically, the vision is that you should be able to make any kind of component, whether HTML, SVG, web components, or whatever, without needing dedicated typeclasses for them. You should just be able to import tags and attributes and get started.

hejfelix commented 1 year ago

Sounds good on the surface, though I can't comment on the implications 😁

I certainly have that exact import in my code and it does indeed suck 😁

armanbilge commented 1 year ago

As much as I like this idea, the only thing I can't figure out is how to avoid introducing a bunch of allocations. The nice thing about Html[F] is that it caches a bunch of modifier allocations internally: once the F[_] is known, you can allocate each modifier exactly once and re-use it.

If modifiers are now in the global scope (instead of imported from Html[F]) then they will need to be re-allocated every time they are used, since they cannot hard-code the F[_].
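To make that concrete, here is a toy sketch (hypothetical names, nothing like calico's actual code) of why a typeclass can cache a modifier while a global definition cannot:

```scala
// Toy Modifier fixed to an effect type F (hypothetical, not calico's code).
final class Modifier[F[_]](val name: String)

// With a typeclass, each modifier is allocated once per Html[F] instance
// and re-used at every call site.
trait Html[F[_]] {
  val cls: Modifier[F] = new Modifier[F]("class")
}

// In the global scope, F[_] is unknown until the call site, so every
// reference must allocate a fresh Modifier.
object GlobalScope {
  var allocations = 0
  def cls[F[_]]: Modifier[F] = {
    allocations += 1
    new Modifier[F]("class")
  }
}
```

With the typeclass, every use of cls on the same Html[F] instance is the same object; with the global def, every use is a fresh allocation.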

On the other hand, we are probably drowning in allocations as it is (Resources, effects, etc.) so it may or may not make a huge difference. But I've tried really hard to reduce allocations 😂

In fact, if we continue along these lines, then we can make the Dom[F] typeclass go away as well. The only reason it exists is so that we can use opaque types instead of allocating a wrapper around every single HtmlElement.
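For illustration, a toy version (hypothetical encoding, not fs2-dom's actual code) of how an opaque type gives a wrapper without a runtime allocation:

```scala
object dom {
  // Toy stand-in for the underlying raw DOM node type.
  opaque type HtmlElement = String

  object HtmlElement {
    // No wrapper object is allocated: the opaque type erases to the raw value.
    def fromRaw(raw: String): HtmlElement = raw
    extension (e: HtmlElement) def toRaw: String = e
  }
}
```

At runtime HtmlElement is just the underlying value, so fromRaw allocates nothing, whereas a regular case class wrapper would allocate one object per element.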

Yet another ergonomic vs performance trade-off 🤔

hejfelix commented 1 year ago

It might make sense to set some performance goals or build some benchmarks. In React, you'd drop down to regular JS/TS for performance-heavy components. Same as in game engines: particle systems don't create a game object for each particle. So unless there's a clear performance goal, I'd almost always prefer ergonomics and leave an escape hatch for performance-critical components.

armanbilge commented 1 year ago

Yeah, that's a good perspective. We currently have this TodoMvc-based benchmark:

https://armanbilge.github.io/react-angular-ember-elm-performance-comparison

(It says Calico 0.1.1 but actually it's the latest live TodoMvc based on 0.2.0-M4. Should fix that.)

We are definitely the slowest there.

I've also done some informal benchmarking: when creating large numbers of elements, there is visible latency.

For example, try adding ~ 100 todos, and switching between the "active" and "completed" tabs.

https://armanbilge.github.io/calico/todomvc/index.html

On my machine there is visible latency when switching from "completed" to "active".

hejfelix commented 1 year ago

Interesting. I don't know much about profiling JS, let alone Scala.js, but I guess it would be good to understand if allocations are actually the culprit or not. 100 elements sounds very low IMHO

hejfelix commented 1 year ago

On my phone in Safari, Calico performs faster than React in that benchmark

2chilled commented 1 year ago

Diffing React and Calico benchmark profiling results is very interesting. Unfortunately it takes a lot of time to map results back to actual Scala code.

One interesting data point is CSS color transitions, which take a lot of time in Calico and almost zero in React. Then there's a lot of time spent performing Calico's microtasks, and again almost zero in React. The list goes on.

From the Scala side, a performance review of Children could be worthwhile. Since my Calico app still performs fast enough it's no priority for me yet. But I think the potential for improvements is still there, despite @armanbilge's conscientious avoidance of evil allocs ;)

Ah, and I think we have not yet seen the effects of the performance improvements made in Cats Effect, have we?

armanbilge commented 1 year ago

Since my Calico app still performs fast enough it's no priority for me yet.

This is very good news, and for me the most important factor. By design, Calico will never have 100% optimal performance, because so much of our stack is shared with the JVM and not optimized specifically for browser JS.

I'm doing my best to optimize the core runtime in Cats Effect, but there are many layers in-between. If we went crazy we could replace FS2 etc., but that IMO defeats the goal of Calico, which is that it re-uses these familiar libraries, so you can share that knowledge and ecosystem.

For the most part, performance seems reasonable. I believe latency becomes most visible when constructing many elements all at the same time (e.g., when first loading the page, or when navigating to a view with lots of new components to render). However, Calico should be very good at making precise updates to an existing page, and hopefully there is not much performance concern in that case.


but I guess it would be good to understand if allocations are actually the culprit or not. 100 elements sounds very low IMHO

Well, I mean ~100 todo items. Each todo item is made up of about 4 HTML elements (container, checkbox, label, delete button). It also involves several signals and listeners, each of which run on their own fiber.

Creating a todo item involves applying multiple modifiers. Every individual modifier e.g. cls := "toggle" relies on countless allocations: allocating the modifier parameters, allocating the thunk that runs the side-effect, allocating the IO around it, allocating the Resource.eval(...) around that, allocating a couple Resource.flatMap(...) and lambdas around that, so it can be sequenced in the component constructor.

All of this, just to do cls := "toggle" :) Multiply that a few times and we are talking about tens of thousands of allocations.
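A toy sketch of that chain, with stand-ins for IO and Resource (hypothetical names, vastly simplified compared to cats-effect):

```scala
final case class IO[A](thunk: () => A) // the IO, plus the thunk inside it

sealed trait Resource[A]
final case class Eval[A](fa: IO[A]) extends Resource[A]
final case class FlatMap[A, B](ra: Resource[A], f: A => Resource[B]) extends Resource[B]

final case class SetAttr(name: String, value: String) // the modifier parameters

// Roughly what one `cls := "toggle"` costs:
def setCls(value: String): Resource[Unit] = {
  val params = SetAttr("class", value) // allocation: the parameters
  val effect = IO(() => ())            // allocations: the thunk, and the IO around it
  Eval(effect)                         // allocation: the Resource.eval
  // ...and the component constructor then adds FlatMap(...) nodes and
  // lambdas on top, so it can be sequenced with the other modifiers.
}
```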

I have thought about this issue a bit. To make an improvement, we would have to make some non-trivial changes to how Modifiers are handled. For example, the := modifiers could somehow be batched together into a single Resource.eval(...), since they never acquire resources (unlike <-- or --> which start background fibers). But implementing such a thing could be hell.
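A rough sketch of the batching idea, again with toy stand-ins (hypothetical API, not a real calico proposal): since plain := setters are pure, a run of them could collapse into a single thunk, IO, and Resource:

```scala
import scala.collection.mutable

final case class IO[A](thunk: () => A)
final case class ResourceEval[A](fa: IO[A]) // toy: an eval-only Resource

final case class Setter(name: String, value: String)

// Today: one thunk, one IO, and one Resource allocated per setter.
def single(attrs: mutable.Map[String, String], s: Setter): ResourceEval[Unit] =
  ResourceEval(IO(() => attrs.update(s.name, s.value)))

// Batched: N setters, still a single thunk, IO, and Resource.
def batched(attrs: mutable.Map[String, String], setters: List[Setter]): ResourceEval[Unit] =
  ResourceEval(IO(() => setters.foreach(s => attrs.update(s.name, s.value))))
```

The hard part, as noted above, is restructuring the Modifier machinery so that runs of pure setters are recognized and grouped in the first place.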


One interesting data point is CSS color transitions, which take a lot of time in Calico and almost zero in React.

This is very interesting. I'm pretty surprised about it actually. Would you mind opening an issue with an example?


Then there's a lot of time spent performing Calico's microtasks, and again almost zero in React.

A "microtask" is not anything special. It's just a task that happens to be scheduled on the browser microtask queue (for immediate execution, prior to re-rendering). So really, there's a lot of time spent performing Calico tasks.

Ah, and I think we have not yet seen the effects of the performance improvements made in Cats Effect, have we?

No, not yet. We are still waiting for Cats Effect v3.5.0, which ships those improvements. In fact, I would say this is the main thing currently blocking Calico v0.2.0 final. At best, I expect modest improvements, but we'll find out.

The new runtime submits multiple fibers as a "batch" to the microtask queue, instead of allocating a wrapper around each one and submitting them one at a time. So we may see an improvement from that, since it decreases the overhead of starting new fibers.


hejfelix commented 1 year ago

I'm doing my best to optimize the core runtime in Cats Effect, but there are many layers in-between. If we went crazy we could replace FS2 etc., but that IMO defeats the goal of Calico, which is that it re-uses these familiar libraries, so you can share that knowledge and ecosystem.

Yeah, that was my thought as well when I was trying to build something similar. I had a bunch of protocol and websocket code on the server side with fs2, but I was using Laminar with Airstream on the client. It's much easier to share protocol code between server and client now that I'm using fs2 in both places.

armanbilge commented 1 year ago

After thinking about this a bit ... I think some issues got conflated. The problem is not the Html typeclass. The problem is this:

import html.{*, given}

That is annoying. However, F[_]: Html is probably fine.

So basically, we just need a way to use the Html typeclass without having to import its members. Fortunately, that's exactly the problem that https://github.com/armanbilge/calico/issues/180 solves. Furthermore, this would let us keep all the caching and avoid those allocations 😅