Pauan / rust-dominator

Zero-cost ultra-high-performance declarative DOM library using FRP signals for Rust!

Better readme, and Rust discussion #16

deklanw opened this issue 4 years ago

deklanw commented 4 years ago

Hi,

I'm interested in FRP and Rust so this looks up my alley. But, the readme isn't giving me much. What kind of FRP? Do you have continuous-time? Higher order? Cycles? Purity?

Continuing https://github.com/krausest/js-framework-benchmark/pull/589#issuecomment-511630228

Yeah, I use TypeScript at work. It's... better than JS, but not very good. Too many quirks, unsound type system, lack of useful features (typeclasses, nice ADTs, etc.), poor syntax (inherited from JS).

I used to use PureScript, then I switched to Rust because PureScript is indeed very slow (understatement), with no intention of improving.

You don't address Reason here. Have you tried it out? Good type system, pretty good performance. Compiler is much faster than Rust's. Good middle ground.

Also, all the stuff about doing performance-critical parts in Rust: I agree. : )

Pauan commented 4 years ago

But, the readme isn't giving me much.

Sorry about that, improving the docs is definitely high up on my TODO list.

What kind of FRP? Do you have continuous-time? Higher order? Cycles? Purity?

It's a hybrid push-pull system, combining the correctness of pull with the performance of push. Therefore it supports both continuous time and events.

It is higher-order (and avoids the issues that Elm had with higher-order Signals). It probably supports cycles, though I haven't actually tried.

The Signals system itself is pure, but this is Rust we're talking about here, so there is some impurity at the edges. With some effort you could remove even that impurity, but I haven't felt the need to do so.

And there's nothing stopping you from doing impure things if you want to (you actually can't prevent impurity in Rust). It's pragmatic, not purity at any cost. So it's more in line with ML and Clojure: try to be pure, but don't obsess about it.

Aside from being a solid and correct implementation, the biggest benefit is that it's fast: it's designed as a zero-cost abstraction, so it should blow the pants off of every other FRP implementation. And it has seamless support for Futures and Streams, which is really nice.
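
For context, here is a rough sketch of how that hybrid push-pull design shows up in the futures-signals API (simplified; the exact signature may vary between versions). Producers only "push" a wake-up notification through the task system, and consumers "pull" the latest value by polling, which is also what makes the integration with Futures and Streams seamless:

use std::pin::Pin;
use std::task::{Context, Poll};

// Simplified sketch of the core trait in futures-signals; it mirrors
// the Stream trait from the futures crate.
pub trait Signal {
    type Item;

    // Returns Poll::Pending if the value hasn't changed (the waker in
    // `cx` is notified when it does), Poll::Ready(Some(value)) with the
    // latest value if it has changed, and Poll::Ready(None) when the
    // signal will never change again.
    fn poll_change(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}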

You don't address Reason here. Have you tried it out?

Yup, like I said I looked into every language you listed (plus Haskell, F#, Idris, Koka, Haxe, ClojureScript, BuckleScript, NoFlo, Shen, Ur, Opa, Nim, various Lisps, etc.). I was so desperate I even designed my own language (which I abandoned after I learned Rust).

Reason was high on my list, but I don't like ML, it has funky stuff and using higher-order modules to do things is really verbose. And then there's the issues with the overall ML ecosystem...

Reason doesn't fix any of that, though it does fix some of the more insane parts of ML syntax. On the other hand it adds in its own syntax quirks, like switch, so...

deklanw commented 4 years ago

It's a hybrid push-pull system, combining the correctness of pull with the performance of push. Therefore it supports both continuous time and events.

How does this compare to Carboxyl? (I don't know much about it, just glanced at it)

Yup, like I said I looked into every language you listed (plus Haskell, F#, Idris, Koka, Haxe, ClojureScript, BuckleScript, NoFlo, Shen, Ur, Opa, Nim, various Lisps, etc.). I was so desperate I even designed my own language (which I abandoned after I learned Rust).

That's quite the bunch. Some of those aren't even production-ready. What exactly were you searching for when you were trying those out? Surely you must admit that Rust isn't a superior approach to some of those for everything. Although I will admit it's remarkably versatile.

Reason was high on my list, but I don't like ML, it has funky stuff and using higher-order modules to do things is really verbose. And then there's the issues with the overall ML ecosystem...

Speaking of verbosity, I've had a similar worry about both Rust AND FRP.

For TodoMVC (removed comments), Svelte: 149 lines, 1913 characters without white space. Rust-Dominator: 429 lines, 6880 characters without white space.

That's quite an extreme difference. Other FRP frameworks/libraries I've looked at are similarly verbose. Obviously there's more to judging languages or paradigms than verbosity but it's something to think about.

Pauan commented 4 years ago

How does this compare to Carboxyl?

I had taken a look at all of the Rust FRP libraries before making my own.

All of them (including carboxyl) follow the same pattern: extensive boxing, lots of Arc and Mutex everywhere, lots of Vec everywhere... basically awful performance. Meanwhile, my Signals are fully stack allocated: no Box, no Arc, no Mutex, no Vec.

To its credit, Carboxyl does use push-pull based Signals, however its design is wrong: it pushes the new value, when it's supposed to always pull the new value. This causes major performance and correctness issues.

It also always Clones the value, whereas my Signals have many options for avoiding Clone.

What exactly were you searching for when you were trying those out?

This was basically the criteria I used to judge all the languages I looked at:

Rust fits every one of those except "purely functional". And Rust's type system (and separation between & and &mut) means that it often "feels" extremely functional, even when you're doing mutation.

This isn't a coincidence: in a purely functional language, if you have a function which takes an object and returns a new object, and there's only a single reference to the input object, then a smart compiler can actually optimize that into a mutation. Even though it internally uses mutation, it's still referentially transparent!

So the fact that Rust requires there to be at most one &mut reference to an object gives you the same guarantee as a purely functional language!
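
A tiny illustration of that point (the Counter type is hypothetical, just to show the shape): the two functions below have the same observable behavior, because Rust guarantees there is at most one &mut reference, so no other code can observe the in-place mutation.

// Hypothetical type, just for illustration.
#[derive(Clone)]
struct Counter {
    value: u32,
}

// "Pure" style: consume the old value and return a new one.
fn incremented(counter: Counter) -> Counter {
    Counter { value: counter.value + 1 }
}

// Mutating style: updates in place, but since at most one &mut reference
// can exist, callers can't tell the difference from the pure version.
fn increment(counter: &mut Counter) {
    counter.value += 1;
}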

As for syntax, that's obviously subjective. I'm quite flexible when it comes to syntax: I like Lisp syntax, I like Haskell syntax, I like F# syntax, and I like Rust syntax. I don't like ML, C#, C, C++, or JavaScript syntax.

Is Rust's syntax my favorite? No, but it's reasonable enough.

Surely you must admit that Rust isn't a superior approach to some of those for everything.

It's true that Rust has its share of warts and drawbacks (not as many as you would think though).

In my head I can imagine a language which is superior to Rust. But that hypothetical language is only slightly better than Rust, and it would take years for me to actually make it.

So, spending years to make a new language which is slightly better than Rust, or just use Rust now... it's a pretty easy choice.

For TodoMVC (removed comments),

To make a fair comparison, you have to remove 28 lines from dominator, because dominator includes the footer (whereas the others don't).

It's true that Svelte is quite small (it's actually one of the smallest ones). To give some perspective (with comments and blank lines removed):

As you can see there's a wide variety, even with similar frameworks (Dominator and Svelte are the only ones that use FRP, the rest all use vdom).

Svelte in particular is so short because it's actually a new language: it's similar to JavaScript but it adds in some new syntax and you have to compile it before you can use it. Whereas dominator is just... regular Rust. Naturally if you create a new language specialized to FRP then it will be shorter than an FRP library for an existing language.

Can dominator be made shorter? Absolutely! Right now you write dominator code like this:

html!("div", {
    .attribute("foo", "bar")
    .attribute("qux", "corge")
    .children(&mut [
        html!("span", {
            .text("Lorem ipsum dolor sit amet")
        }),
        html!("span", {
            .text("consectetur adipiscing elit")
        }),
    ])
})

I like this style, I think it reads nicely and makes refactoring easier. But it is possible to make a macro which instead allows you to write this:

dom! {
    <div foo="bar" qux="corge">
        <span>Lorem ipsum dolor sit amet</span>
        <span>consectetur adipiscing elit</span>
    </div>
}

These macros actually already exist, so they just need to be repurposed to work with dominator. And by using those JSX-style macros, you could easily make dominator much shorter.

So I don't think this is an FRP issue, it's a stylistic issue. Notice that Elm is also quite verbose, despite not using FRP. That's because it lacks a JSX-style macro.

Another difference is that both Dominator and Elm have static typing. That adds some extra lines because you have to define your types (e.g. the Filter, Todo, and State type definitions take up 21 lines).

I was curious, so I tried rewriting the dominator example using the JSX style macro (and removing the type definitions), and it's 235 lines, making it shorter than Mithril and React.

At that point most of the excessive baggage is actually event listeners (the FRP only takes up 20 extra lines).

limira commented 4 years ago

For TodoMVC (removed comments), Svelte: 149 lines, 1913 characters without white space. Rust-Dominator: 429 lines, 6880 characters without white space. That's quite an extreme difference

Adding to what Pauan already answered, I feel it is a bit unfair to compare the verbosity of a dynamically typed language to that of a statically typed language. For small apps, static languages always appear more verbose (you have to define things like struct TodoItem {...}, struct AppState {...}, initialization...). Combined with the fact that Dominator doesn't use JSX syntax, it gives the impression of an extreme difference. But for a big app, a static language helps greatly. Even in medium-sized apps, a static language is still a great help for code maintenance.

Edit: I somehow missed this in Pauan's comment (which renders my comment redundant):

Another difference is that both Dominator and Elm have static typing. That adds some extra lines because you have to define your types (e.g. the Filter, Todo, and State type definitions take up 21 lines).

deklanw commented 4 years ago

I had taken a look at all of the Rust FRP libraries before making my own.

Ah. If I ever get into Rust FRP I'll come back to this comment. Could be useful to rephrase what you wrote into the readme :P

This was basically the criteria I used to judge all the languages I looked at:

Yes, but for what? Based on what you said it seems like you mean you were looking for one language which you could use for basically everything. So a versatile general-purpose language. I would agree that Rust seems great for that. Perhaps a bit fussy, imo, for higher level tasks. But, still it's a great option.

Have you looked at Red lang? It's trying to be a full-stack multi-purpose high-performance cross-platform language. The ecosystem is basically non-existent, but given your goals you might find it interesting, https://www.red-lang.org/

If I need to do some quick and dirty adhoc stuff I'd probably use bash or Python. If I need to query some data in a store I'd probably use SQL or Datalog. Etc. DSLs are useful:

Svelte in particular is so short because it's actually a new language: it's similar to JavaScript but it adds in some new syntax and you have to compile it before you can use it. Whereas dominator is just... regular Rust. Naturally if you create a new language specialized to FRP then it will be shorter than an FRP library for an existing language.

Yes. I would consider it a positive. Although, as far as I can tell, Svelte isn't FRP in the Conal sense or the Rx sense. It's just reactive.

As you can see there's a wide variety, even with similar frameworks (Dominator and Svelte are the only ones that use FRP, the rest all use vdom).

This is a weird distinction: Reflex and CycleJS are FRP and VDOM. Turbine and Dominator are FRP and no VDOM. Svelte, Surplus, and Solid are reactive, not FRP, and no VDOM. Imba is not reactive, not FRP, and no VDOM. React is not reactive, not FRP, and VDOM.

The only correlation I see is that you can use reactivity (functional or otherwise) (compile-time or run-time) to avoid having a VDOM. But, you can also just have a VDOM anyway.

It's true that Svelte is quite small (it's actually one of the smallest ones). To give some perspective (with comments and blank lines removed):

I don't think line counts alone are useful. For example, if you look at just line counts then it appears that Reflex blows Dominator away in conciseness (disregarding footer stuff). 429 vs 211. But, if you look at characters without whitespace it's basically the same: 6880 vs 6512. I think it's most useful to look at all the verbosity measures you can. (Looking at the Reflex codebase reinforces that their lines are super dense in places).

    (clearCompletedAttrsButton, _) <- elDynAttr' "button" clearCompletedAttrs $ dynText $ ffor tasksCompleted $ \n -> "Clear completed (" <> T.pack (show n) <> ")"
  let combineItemChanges = fmap (foldl' (.) id) . mergeList . map (\(k, v) -> fmap (flip Map.update k) v) . Map.toList

How do you modularize with Dominator? Looking at that example I see an ungodly level of nesting. I might not mind the more verbose representation of markup if I wasn't reading 21 (is that right?) indentations before .blur().

Another difference is that both Dominator and Elm have static typing. That adds some extra lines because you have to define your types (e.g. the Filter, Todo, and State type definitions take up 21 lines).

You're right about the typing. Although, I'm sure if you added types to the Svelte example it would still be half the length in every measure. The Elm example is also ~1400 fewer characters than Dominator.

I think this kind of comparison would be interesting for a larger app like RealWorld app. But, many smaller libraries don't have an implementation. Turbine, Dominator, Solid don't atm, I know. Reflex just recently got an implementation apparently? https://github.com/qfpl/reflex-realworld-workshop

These macros actually already exist, so they just need to be repurposed to work with dominator. And by using those JSX-style macros, you could easily make dominator much shorter.

I don't know how Dominator works (documentation!), so bear with me. How would you represent your .event or all of your .signal varieties with JSX? FWIW, Turbine is looking into supporting JSX and I think they're trying to figure out the most elegant way to do the same (it seems to me) https://github.com/funkia/turbine/issues/75

Pauan commented 4 years ago

Yes, but for what? Based on what you said it seems like you mean you were looking for one language which you could use for basically everything. So a versatile general-purpose language.

No, I mentioned good JS support because my goal is to write web apps and Chrome / Firefox extensions (that requirement excludes most languages).

I have ~13 years of experience using JS, and my goal is to replace JS with a far better language (more maintainable, easier to refactor, less chance of bugs, high-level features, etc.)

Rust works fantastic for that. The other languages I tried don't (because they fail at the various criteria I listed earlier).

Have you looked at Red lang?

I haven't heard of it, but taking a look at it, it seems interesting but it fails many of my criteria:

If I need to do some quick and dirty adhoc stuff I'd probably use bash or Python. If I need to query some data in a store I'd probably use SQL or Datalog. Etc. DSLs are useful

Okay, but my use case is creating large web apps. I am aware that DSLs are useful (I used to be a Lisp programmer after all, the king of DSLs).

Although, as far as I can tell, Svelte isn't FRP in the Conal sense or the Rx sense. It's just reactive.

Indeed, that is true. It also doesn't seem to have support for FRP lists, which means its performance will degrade significantly for large lists. And it seems working with lists in general is a bit awkward.

Dominator does not have that problem, because it has SignalVec, which means changes to the DOM are O(1) cost rather than O(n) cost.

The only correlation I see is that you can use reactivity (functional or otherwise) (compile-time or run-time) to avoid having a VDOM. But, you can also just have a VDOM anyway.

Yes, that is true. FRP enables you to avoid a vdom (and thus improve the performance), but you can mix and match FRP and vdom as desired.

I think it's most useful to look at all the verbosity measures you can.

That is true, but quite frankly I'm not interested in putting in the effort to do an in-depth comparison analysis: there's many things that would need to be taken into account, such as different styles, modularity, correctness, maintainability, runtime performance, scalability, etc.

And since most of the examples use JavaScript, I'm not interested in them in the first place. I'm perfectly happy to trade a bit of extra verbosity if it means I can avoid the issues of JavaScript.

I've been quite content writing large web apps with dominator, it's been a much nicer experience than anything else I've tried (and I've tried a lot).

How do you modularize with Dominator?

You just use regular structs/functions/methods/modules, the same way you modularize any Rust code.

That is a good point: having a single 400-line file is intimidating, so I should split it up into separate files.

I think this kind of comparison would be interesting for a larger app like RealWorld app.

Oh, I hadn't heard of that! I should make a Dominator implementation of it.

How would you represent your .event or all of your .signal varieties with JSX?

That's a good question. I'm personally not interested in JSX, so I haven't thought too hard about it, but I imagine it would look something like this:

dom! {
    <button class="clear-completed" visible={state.is_visible.signal()}
        event: clone!(state => move |_: events::Click| {
            state.todo_list.lock_mut().retain(|todo| todo.completed.get() == false);
            state.serialize();
        })>
        "Clear completed"
    </button>
}

Basically, anything inside {} would be a Signal (or SignalVec for children), and event: would be used to create events.

Pauan commented 4 years ago

I've improved the TodoMVC example so that it's properly modularized into separate files:

https://github.com/Pauan/rust-dominator/tree/master/examples/todomvc/src

limira commented 4 years ago

Another difference is that both Dominator and Elm have static typing. That adds some extra lines because you have to define your types (e.g. the Filter, Todo, and State type definitions take up 21 lines).

You're right about the typing. Although, I'm sure if you added types to the Svelte example it would still be half the length in every measure. The Elm example is also ~1400 fewer characters than Dominator.

@Pauan said "Another difference...", but @deklanw's comment gives me the impression that you see it as the only difference. (The lack of JSX-like syntax also contributes to the verbosity.)

What I say here is from my personal experience; @Pauan's thoughts may differ. When comparing languages (or other things), I tend to base my judgment on the benefit I get from them. For example (just to name a few):

When a language meets the base benefits, then I consider other things. I love conciseness. But static typing must come before it, so Svelte was never on the table.

@Pauan said:

Reason was high on my list, but I don't like ML, it has funky stuff and using higher-order modules to do things is really verbose. And then there's the issues with the overall ML ecosystem...

I guess they were comparing the verbosity of ML languages with the language of their choice (not the verbosity of one framework against another). And conciseness is not just about the number of lines or the number of characters, but also the feel (sorry, I am not a native English speaker, so it's hard to say exactly what I want to express here - what the feeling is?!?).

Rust is able to compile to wasm, but it is not designed for things like HTML. So its own syntax will never have the conciseness of more specialized languages. If someone wants to reduce the number of lines when describing the view of a front-end app, macros can always help. But they have downsides: a macro's content is not auto-formatted, and the compile errors are bad. Those are the reasons some people accept the verbosity of Rust when building front-end frameworks. Besides dominator, draco and sauron also do not use a JSX-like macro.

deklanw commented 4 years ago

Indeed, that is true. It also doesn't seem to have support for FRP lists, which means its performance will degrade significantly for large lists. And it seems working with lists in general is a bit awkward.

Svelte doesn't support FRP anything because it's not FRP, as I understand it. If I understand correctly it uses keyed reconciliation of lists just like a VDOM. As I understand, that's fast. How do lists work in Dominator?

No, I mentioned good JS support because my goal is to write web apps and Chrome / Firefox extensions (that requirement excludes most languages). ... Okay, but my use case is creating large web apps. I am aware that DSLs are useful (I used to be a Lisp programmer after all, the king of DSLs).

Ah, I didn't realize you were just talking about web apps. The list of langs you gave confused me. Not all of those are intended for creating web apps (or have the features you mention). Koka is a research language for algebraic effects, right? Nim does compile to JS, but it's not widely used for that purpose, it's not functional, and it doesn't have ADTs. Idris isn't production ready and doesn't compile to JS. NoFlo is a library for FBP-inspired programming in JS. Afaik. Etc.

Yes, that is true. FRP enables you to avoid a vdom (and thus improve the performance), but you can mix and match FRP and vdom as desired.

A stronger statement: reactivity does. FRP is just one kind.

I haven't heard of it, but taking a look at it, it seems interesting but it fails many of my criteria:

Yes I was tempted to say "preferences" but switched to "goals", but I was mistaken with that if you're not interested in a general purpose language. Anyway, I hope you still find it interesting for being Rust-like in scope at least.

I've been quite content writing large web apps with dominator, it's been a much nicer experience than anything else I've tried (and I've tried a lot).

Any open-source large examples I could look at?

Oh, I hadn't heard of that! I should make a Dominator implementation of it.

Now that would be interesting. It's quite a bit more work. After the readme of course :P I would like to compare to Reflex and Svelte. Unfortunately, Turbine doesn't have one and I don't think CycleJS does either.

That's a good question. I'm personally not interested in JSX, so I haven't thought too hard about it, but I imagine it would look something like this:

That solution looks like the Turbine one. It seems strange to me though since it's not conceptually an attribute but more like an output.

I've improved the TodoMVC example so that it's properly modularized into separate files:

Less intimidating. Nice. But, still basically inscrutable (to me) without docs.

Is yours the only WebAssembly Rust FRP library?

deklanw commented 4 years ago

What I say here is from my personal experience; @Pauan's thoughts may differ. When comparing languages (or other things), I tend to base my judgment on the benefit I get from them. For example (just to name a few):

Of course. In no way do I think I'm fairly comparing languages based solely on a few measures of verbosity. It's just one aspect I wanted to discuss about one paradigm (FRP) in relation to others.

And conciseness is not just about the number of lines or the number of characters, but also the feel (sorry, I am not a native English speaker, so it's hard to say exactly what I want to express here - what the feeling is?!?).

Agree. My feel is that FRP is verbose. Looking at some empirical measures is one way for me to evaluate that feeling. Using and learning more about FRP frameworks (like Dominator!) is another :)

limira commented 4 years ago

My feel is that FRP is verbose

I have worked with it for about 2 months now. I actually feel that it is verbose too. The verbosity in the view render (building the DOM) can be helped by macros, but the verbosity when dealing with Mutable/MutableVec cannot be mitigated. If a user is not crazy about performance, I guess FRP (https://github.com/Pauan/rust-signals) is not for them.

Another problem is that the ratio of shared code (with the back-end) also decreases. On the back-end I use just normal data types, but on the front-end Mutable/MutableVec is everywhere, which makes a lot of similar code from the back-end non-reusable. @Pauan, have you ever encountered this problem? If yes, how do you solve it?

Pauan commented 4 years ago

If I understand correctly it uses keyed reconciliation of lists just like a VDOM. As I understand, that's fast.

Yes, but it's not fast; it actually has worse than O(n) performance.

This is the algorithm that Svelte uses for keyed update. It calls that every time any element in the list changes, and it also calls it anytime a sub-element in the list changes (recursively).

Let me give an example. Let's suppose you have this Svelte app:

<script>
    let foo = [
        {
            id: 1,
            bar: [
                { id: 4, isChecked: true },
                { id: 5, isChecked: false },
            ]
        },
        {
            id: 2,
            bar: [
                { id: 6, isChecked: false },
                { id: 7, isChecked: true },
            ]
        },
        {
            id: 3,
            bar: [
                { id: 8, isChecked: true },
                { id: 9, isChecked: true },
            ]
        },
    ];

    setInterval(() => {
        foo[0].bar[0].isChecked = !foo[0].bar[0].isChecked;
    }, 1000);
</script>

{#each foo as foo (foo.id)}
    {#each foo.bar as bar (bar.id)}
        <input type="checkbox" checked={bar.isChecked} />
    {/each}
{/each}

Even though it only changes a single isChecked property, it has to do a full O(n) update of the entire app. It has to update all of the elements in foo, and for each element in foo it has to update all of the elements in foo.bar. That's O(n*m).

And that's only with two levels of lists. If you have a large DOM app it's likely that you'll have many lists. So if you change one tiny thing deeply nested in the DOM, it has to do multiple O(n) updates just to change that one thing. This is the same sort of performance issue that vdom has.

Dominator doesn't need to do any of that, it knows exactly what it needs to update, so it doesn't need to do any sort of diffing or searching whatsoever. That means its performance is always O(1), no matter how big your app is, and no matter how big your lists are.

There's always a trade-off for everything. Svelte is very concise, but you pay for that with runtime performance.

How do lists work in Dominator?

You create a MutableVec, like so:

let v = MutableVec::new();

Now you can do various operations on it, similar to a Vec:

let mut v = v.lock_mut();
v.push(1);
v.push(2);
v.reverse();

And lastly you can listen for changes by calling v.signal_vec(), which returns a SignalVec. You can then plug that into dominator:

html!("div", {
    // Create a DOM node for each element in `v` and keep it in sync with `v`
    .children_signal_vec(v.signal_vec().map(|v| {
        html!("div", {
            ...
        })
    }))
})

How it works internally is that when you mutate a MutableVec, it sends a VecDiff message, which contains the minimum amount of information about the change.

For example, if you push something into the MutableVec, then it will send a VecDiff::Push message. Dominator listens for these messages and then updates the DOM accordingly.

That means DOM updates are O(1) no matter how big the MutableVec is, it means we don't need keys, and it means we don't need to do any sort of vdom diffing or replacing all of the DOM nodes (which are both slow, and cause issues with state).
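
For reference, here is a simplified sketch of what those diff messages look like in futures-signals (the exact variants and field names may differ between versions of the crate):

// Simplified sketch of the VecDiff type from futures-signals.
enum VecDiff<A> {
    Replace { values: Vec<A> },                  // the whole list was replaced
    InsertAt { index: usize, value: A },         // one element was inserted
    UpdateAt { index: usize, value: A },         // one element was overwritten
    RemoveAt { index: usize },                   // one element was removed
    Move { old_index: usize, new_index: usize }, // one element was moved
    Push { value: A },                           // one element was appended
    Pop {},                                      // the last element was removed
    Clear {},                                    // the list was emptied
}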

Not all of those are intended for creating web apps

Rust wasn't designed for creating web apps. Even JavaScript wasn't designed for creating web apps. It's quite rare for a language to be specifically designed to create web apps. The only ones I can think of off the top of my head are Elm and Reason.

Whether a language is designed for it or not, as long as it can be used for creating web apps, and it's a good language, then that's what matters. It's easy enough to create libraries which allow for creating web apps (which is what I did with Rust).

Koka is a research language for algebraic effects, right?

Yes, but it compiles to JS, it has a JS FFI, it has a lot of JS bindings, and it can be used to create web apps.

In fact it fulfills basically all of my criteria. The biggest issue is that it isn't remotely close to production ready (it's quite buggy), which is why I chose Rust instead.

Nim does compile to JS but [...] it's not functional and doesn't have ADTs.

Yes, which is one of the reasons I didn't choose it. To be clear, the languages I listed are languages I rejected (for one reason or another). The only language I accepted is Rust, which is why I'm using it.

Idris isn't production ready and doesn't compile to JS.

For many years Idris has compiled to JS, and work is being done on WebAssembly support as well.

In fact Idris has had a JS compilation mode since at least 2013. Naturally it wasn't very good, which is one reason why I rejected Idris.

[Red lang] but I was mistaken with that if you're not interested in a general purpose language.

Even if I was interested in a general purpose language, it still fails my criteria. My criteria would not change even if I did desire a general purpose language.

Anyway, I hope you still find [Red lang] interesting for being Rust-like in scope at least.

The most interesting things about it are the built-in support for serialization/deserialization, and the wide variety of built-in data types. The rest just seems pretty standard.

Any open-source large examples I could look at?

https://github.com/Pauan/tab-organizer/tree/rust https://github.com/Pauan/SaltyBetBot

It seems strange to me though since it's not conceptually an attribute but more like an output.

That's why I chose the : syntax rather than = (which is normally used for attributes in JSX). In any case, that's all bikeshedding stuff, different syntax could be chosen instead.

Is yours the only WebAssembly Rust FRP library?

There is also mika which is based on my Signals library, but with its own DOM implementation and structure. It's a lot newer than dominator.

Pauan commented 4 years ago

@limira have you ever encountered this problem? If yes, how do you solve it?

That's a good question. I do indeed have some duplication between the client and server:

https://github.com/Pauan/tab-organizer/blob/70371dafbeb840c7ad82e77aea5bb055fedb445a/src/state.rs#L88-L97

https://github.com/Pauan/tab-organizer/blob/70371dafbeb840c7ad82e77aea5bb055fedb445a/src/bin/sidebar/types.rs#L135-L149

(Though as you can see they're not exactly the same, the client adds some extra state, and also wraps the strings in Arc)

I don't have a great solution right now, but I've been thinking about some sort of macro that would auto-generate both the client code and the server code.

The other option is of course to just use Mutable + MutableVec on both the server and client, which might not be a terrible idea.

limira commented 4 years ago

The other option is of course to just use Mutable + MutableVec on both the server and client, which might not be a terrible idea.

I prefer keeping them isolated in the client app's state rather than spreading them to the server, because I don't want .get()/.get_cloned(), .lock_ref()/.lock_mut()... all over the server code. Another concern is that some custom derives may not work, such as diesel's Queryable or Insertable, so code duplication still occurs at some point.

deklanw commented 4 years ago

it has to do a full O(n) update of the entire app. There's always a trade-off for everything. Svelte is very concise, but you pay for that with runtime performance.

In every app I've seen, the keyed list parts are isolated pieces of the app. And, as has been noted, large lists in the thousands are rare. Looking at js-framework-benchmark (flat list) it looks like Svelte and Dominator are roughly around the same speed for operations. Svelte being faster at creation, Dominator being faster at change.

I've never heard anyone mention nested list rendering. That's interesting. In what apps would you have large nested lists being rendered and individual elements being updated? I can only think of a TreeTab extension, but the number of tabs is never high.

There are other frameworks which are much faster than both Svelte and Dominator for lists. I'm interested now in how those might perform for nested lists. Never considered.

The most interesting things about it are the built-in support for serialization/deserialization, and the wide variety of built-in data types. The rest just seems pretty standard.

Object-oriented reactivity built into the language. Explicitly targeting low and high level programming. Cross-compilation, cross-platform GUIs. JIT and AOT. Homoiconic. Etc. If this is standard then I'd like to know what else does something similar to this! Admittedly I'm only half-interested in it for the same reasons that you rejected it :P I do prefer ADTs etc too

Rust wasn't designed for creating web apps. Even JavaScript wasn't designed for creating web apps. It's quite rare for a language to be specifically designed to create web apps.

I wasn't aware of Koka and Idris compiling to JS. Thanks. I suppose I should have said "there is a community trying to create web apps with it". As you note, Koka and Idris aren't suitable. The community heuristic is an easier test. Rust passes.

I admire your dedication to using a principled language. I imagine that even if there were no community using a language that you like (that compiles to JS) you would probably use it.

Those links to examples you posted: Nice. I may take a look. Would be good to put those in the readme too!

Pauan commented 4 years ago

In every app I've seen, the keyed list parts are isolated pieces of the app.

"Isolating" it doesn't change the O(n) cost. And as soon as a parent component changes a child component (whether it be by passing in props, or calling a method, or whatever) then it will trigger the O(n) update for the child component.

And, as has been noted, large lists in the thousands are rare.

Thousands might be rare. But hundreds is certainly common: just go to Reddit, YouTube, or GitHub and look at the hundreds of comments. Or any sort of chat client (e.g. Slack, Gitter). Or text editors (e.g. Atom, which used to use React but abandoned it because of performance issues).

Looking at js-framework-benchmark (flat list) it looks like Svelte and Dominator are roughly around the same speed for operations. Svelte being faster at creation, Dominator being faster at change.

It is expected that dominator would be faster at change and slower at creation (because it has to setup all of the Signals stuff).

In addition, js-framework-benchmark is just testing the performance of creating several thousand very simple DOM nodes and slapping them into the DOM. A real app with complex hierarchies, multiple lists, etc. will perform differently.

I learned that the hard way when I used Mithril to create a complex website (proprietary, so I can't share). At the time Mithril was the fastest vdom framework, far faster than anything else (according to the benchmarks).

But it would often take 1+ seconds to load a page. When I switched it to use my own custom FRP implementation, the performance issues completely vanished and now the pages loaded instantly.

This was a perfect apples-to-apples comparison, because the website itself didn't change: it was the same CSS styles, the same data, the same layout, the same everything. Literally the only thing that changed was me swapping out Mithril for my own FRP framework.

I even intentionally made the FRP API as close to Mithril as I could (using the same m function), which made the migration very easy.

Can you make fast websites with vdom? Sure, but you have to put in some work and careful planning to avoid certain performance issues. It often requires various performance hacks (keys, caching, making duplicates of data and keeping them in sync, etc.)

A great example is retrieving some data from the server and then displaying it in sorted order. You have two options:

  1. Sort the data in the render function, which is really slow (unacceptable).

  2. Sort the data outside of the render function, but now you have to keep it in sync with the original server data (maintenance and refactoring problem).

With dominator, it's as simple as calling the sort_by_cloned method, which efficiently sorts the list, and then automatically (and very efficiently) keeps the output list sorted when the input list changes.
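
A minimal sketch of that pattern (the Row type and its name field are hypothetical, just for illustration):

use futures_signals::signal_vec::{MutableVec, SignalVec, SignalVecExt};

// Hypothetical data type, just for illustration.
#[derive(Clone)]
struct Row {
    name: String,
}

// Returns a SignalVec that stays sorted by name as `rows` changes,
// emitting minimal diffs instead of re-sorting the whole list.
fn sorted_rows(rows: &MutableVec<Row>) -> impl SignalVec<Item = Row> {
    rows.signal_vec_cloned()
        .sort_by_cloned(|a, b| a.name.cmp(&b.name))
}

You can then map the sorted rows into DOM nodes and hand the result to .children_signal_vec, like any other SignalVec.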

With dominator, you get excellent performance from the start, while putting in minimal effort, without using any performance hacks, because it's designed to be fast by default, and it's designed by default to scale to very large apps. You also get excellent silky smooth 60 FPS animations with dominator, which is something that vdom really struggles with.

Animations are basically the worst-case scenario for vdom, because it involves updating a single tiny thing 60 times per second, so it really stresses the vdom diffing algorithm. The performance is so abysmal that vdom libraries always require mucking around with real DOM nodes in order to do animations.

Svelte isn't too bad for animations, since it can take advantage of reactivity. But it still has the O(n) update problem (e.g. if you want to animate a single element in a large list, which is an extremely common thing to do).

I've never heard anyone mention nested list rendering. That's interesting. In what apps would you have large nested lists being rendered and individual elements being updated?

How about adding/removing/editing a comment on a website like Reddit?

How about something like Excel (e.g. Google Docs) which displays a large grid of elements which need to be updated?

How about complex chat clients like Slack or Discord?

How about any app which displays tabular data which can be edited/added/removed?

I can only think of a TreeTab extension, but the number of tabs is never high.

I have had over 3,000 tabs at one time. If you have a small number of tabs then you don't need any sort of tab management extension, it's only when you have a large number of tabs that you need it.

Object-oriented reactivity built into the language.

Ah, I missed that. Though it doesn't seem to benefit much from being "built-in", it still has a lot of boilerplate:

a: make reactor! [
    x: 1
    y: 2
    total: is [x + y]
]

a/x: 100

What you'd really want is something like this:

x: 1
y: 2
total: x + y

x: 100

That's how it would be in a language designed for FRP.

Explicitly targeting low and high level programming. Cross-compilation, cross-platform GUIs. JIT and AOT.

There are a lot of languages that can do those things: Rust, Swift, Factor, Ceylon, Haxe, Go, Julia, Nim, various Lisps (Common Lisp in particular), OCaml, etc.

Homoiconic

Every single Lisp has that property, and homoiconicity is overrated anyways. I know this because I used various Lisps for a few years, I've created Lisp compilers, and I created my own Lisp dialect: homoiconicity isn't needed for macros.

You don't need homoiconicity, you just need quote, unquote, and ADTs, which can be implemented in any language: Rust has both Scheme-style template-based macros and also Common Lisp style procedural macros (with quote + unquote).

Haskell is another language which has support for macros (via quote/unquote), it's called Template Haskell.
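
A minimal example of the template-based flavor (the Scheme-style one), where the pattern acts roughly as the quoted form and the body as the template it expands into:

// A tiny declarative macro: the left side is the pattern, the right side
// is the template it expands into, with $x substituted at each use.
macro_rules! square {
    ($x:expr) => {
        $x * $x
    };
}

fn main() {
    // Expands at compile time to `3 * 3`.
    println!("{}", square!(3));
}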

I suppose I should have said "there is a community trying to create web apps with it". As you note, Koka and Idris aren't suitable. The community heuristic is an easier test. Rust passes.

The Rust web community is extremely small and young, it's only really existed for the past year or so. But I started using Rust to make web apps 2 years ago.

I don't care whether a community exists or not, I don't wait for ready-made solutions, I make my own. So it doesn't bother me at all that Koka and Idris (or any language) don't have a web community.

I imagine that even if there were no community using a language that you like (that compiles to JS) you would probably use it.

I sure would! In addition, 99+% of the ready-made solutions are either imperative or vdom. FRP is extremely rare, and most FRP implementations are bad (either in performance or correctness). So I'm forced to make everything myself anyways, I'm used to it.

deklanw commented 4 years ago

"Isolating" it doesn't change the O(n) cost. And as soon as a parent component changes a child component (whether it be by passing in props, or calling a method, or whatever) then it will trigger the O(n) update for the child component.

For VDOM libraries, yes (modulo manual short circuiting). For Svelte and other non-VDOM reactive frameworks I was under the impression that wasn't the case. Am I wrong?

It is expected that dominator would be faster at change and slower at creation (because it has to setup all of the Signals stuff).

Yes. You said Svelte was trading off performance for conciseness. I was trying to point out that that performance conversation is nuanced. I would like to learn more about how the list reconciler works in VDOM libraries and Svelte. Maybe I'll ask around. But, I'll ask you: what's the technical limitation preventing Svelte from having efficient list and (esp.) nested list reconciliation? Are you saying it is only possible with FRP? What does that have to do with conciseness?

For the complicated app examples you gave: good examples. Thank you. I think poor animation performance with VDOM motivated Rich Harris in particular with Svelte.

Ah, I missed that. Though it doesn't seem to benefit much from being "built-in", it still has a lot of boilerplate:

Yes, perhaps so, but having such a feature built-in (verbosely or not) is still unusual.

There are a lot of languages that can do those things: Rust, Swift, Factor, Ceylon, Haxe, Go, Julia, Nim, various Lisps (Common Lisp in particular), OCaml, etc.

I'm barely educated on these topics and only know a bit about these languages. So, I apologize in advance. But, as far as I'm aware none of them have all of (or even most of) those properties as goals explicitly, officially, by the creator(s) of the language.

Every single Lisp has that property, and homoiconicity is overrated anyways.

I'm not educated enough on the topic but I'll keep this in mind if I ever get into it.

The Rust web community is extremely small and young, it's only really existed for the past year or so. But I started using Rust to make web apps 2 years ago.

This is true but the Rust community is relatively large and Rust is associated with WebAssembly. It would seem the community was inevitable.

I sure would! In addition, 99+% of the ready-made solutions are either imperative or vdom. FRP is extremely rare, and most FRP implementations are bad (either in performance or correctness).

I wonder if reactive-but-not-functional libraries like Svelte and Surplus should be called "imperative". Anyway, I get your point. Agreed @ rarity and bad implementations. I think Turbine can be made fast, the development is just on hold and performance isn't high on the list afaik. The implementation is correct, I believe, though.

May I ask, how did you get into FRP? Why do you choose it as your paradigm?

Thanks for the continued conversation. I'm learning a lot.

Pauan commented 4 years ago

For Svelte and other non-VDOM reactive frameworks I was under the impression that wasn't the case. Am I wrong?

Yes, that is wrong, because Svelte is using an O(n) reconciliation algorithm for lists, the same as vdom.

As I showed in my example before, it will always do a full O(n) update of the list, even if only one element in the list changed. You can look at the JS output tab to see this for yourself.

This is true even if you put the list into some child component somewhere. This is true even if you use keys. This is true even if you only have 1 list.

All dynamic lists in Svelte are handled with the same O(n) update algorithm, no exceptions.

You said Svelte was trading off performance for conciseness. I was trying to point out that that performance conversation is nuanced.

Oh, no doubt there's many interesting trade-offs happening. That's also true with FRP libraries: each FRP library does things a bit differently, and there's many different designs for FRP.

I would like to learn more about how the list reconciler works in VDOM libraries and Svelte.

As I showed earlier, this is the algorithm that Svelte uses to update lists. Here is an example of a vdom list update algorithm.

They are fundamentally doing the same thing: they keep a map of the old keys, and then they loop over the old and new lists, checking to see which ids are in the map, which ones aren't, and which ones have changed positions. Then they update the DOM accordingly.

This requires (at a minimum) an O(max(n, m)) traversal of both lists. In the case of Svelte it's actually doing four loops, so the performance will be worse than O(max(n, m)).

In addition, as they are looping over the list, they also update all of the sub-elements of the list (which is needed in case one of the sub-elements changed). So it's a recursive algorithm, which means that even if only one element in the list changed, it has to recursively update everything.

But, I'll ask you: what's the technical limitation preventing Svelte from having efficient list and (esp.) nested list reconciliation?

It needs to keep track at runtime which element changed. That requires something similar to MutableVec (though in JS, obviously).

I suppose it doesn't necessarily need full on FRP, it just needs the MutableVec to keep track of all dirty changes. Then the reconciliation algorithm can use that to efficiently update only what it needs to. This will bring the performance cost down to O(1).

It will still be a bit clunky to use, since Svelte doesn't have support for mutable methods, so you need to do weird things like foo = foo; in order to trigger the update. But that's how Array works right now, so at least it's not any worse than the status quo.

Are you saying it is only possible with FRP? What does that have to do with conciseness?

Originally I thought that O(1) update would require something like SignalVec, and thus would pay a cost in verbosity.

But after thinking about it, Svelte would only need to keep dirty changes in MutableVec, so it wouldn't pay a verbosity cost.

I think poor animation performance with VDOM motivated Rich Harris in particular with Svelte.

Right, and Svelte is good for animation performance except for lists. So if they can adopt something similar to MutableVec, then they would be on equal footing with dominator for raw performance.

Yes, perhaps so, but having such a feature built-in (verbosely or not) is still unusual.

I certainly agree with that! FRP is quite niche (even as a library), so having a language with built-in support for it is quite rare indeed.

But, as far as I'm aware none of them have all of (or even most of) those properties as goals explicitly, officially, by the creator(s) of the language.

Why does that matter? As long as a language has best-in-class support for it, then that's what matters.

Even if a language has those things as goals, that doesn't guarantee that it will do them well. I'd rather have a language that can do those things well, even if they aren't explicit goals of the language.

(In addition, Rust does actually have those things as official goals, except for GUIs. Cross-platform GUIs do exist for Rust, they just aren't "official")

This is true but the Rust community is relatively large and Rust is associated with WebAssembly. It would seem the community was inevitable.

Most Rust programmers don't use Wasm, and many of them haven't even heard of it. After all, Rust is marketed as a replacement for C++, so most people are writing desktop applications, so they don't need or care about Wasm.

It's only a handful of crazies like me who are deeply invested in Wasm. That will change in the future, but for now the Rust Wasm group is quite small (only a couple dozen people at most, and only ~6 active members, including me).

I wonder if reactive-but-not-functional libraries like Svelte and Surplus should be called "imperative".

I would probably call it "reactive-lite" or something like that. It does indeed seem to be in its own category, not really fitting in with FRP or any of the other standard paradigms.

I think Turbine can be made fast, the development is just on hold and performance isn't high on the list afaik. The implementation is correct, I believe, though.

I took a look at Hareactive, and it seems unnecessarily slow due to every node having a DoubleLinkedList. But it does indeed seem correct in behavior, which is unusual but very good.

May I ask, how did you get into FRP? Why do you choose it as your paradigm?

I actually independently invented FRP several years ago (back in 2013). At the time I hadn't even heard of FRP, and I wasn't a functional programmer, but I was writing a large Chrome Extension and I was having a hard time keeping the DOM in sync with the state: the code was a spaghetti mess.

So I came up with this idea of mutable Cells. Each Cell would have a value, and it could listen for changes to other Cells. So when you update one Cell, everything that depends on it will automatically update.

It was a horrible implementation, it wasn't pure or functional at all, it had no support for lists, it was inefficient, buggy, had fatal design flaws, and didn't handle cleanup correctly. But the idea was so successful that I kept experimenting with it, improving it, trying out different designs, etc.

Over time I figured out much better ways of implementing things, I found (and fixed) fatal design flaws, I figured out how to make it really fast, I made cleanup work correctly, I learned about Elm and other FRP systems, I made it more pure, I integrated it with JS Promises, etc.

My futures-signals library is the result of those several years of experimentation and improvements, combined with stealing the best ideas from futures so that it fits in properly with Rust and is zero-cost.

deklanw commented 4 years ago

Sorry for the late response. These are long responses. In fact, I imagine half the length of your responses so far is about the length of a good README :P

Btw, I said earlier that Reflex uses a VDOM but that was false. Just wanted to correct that.

This requires (at a minimum) an O(max(n, m)) traversal of both lists. In the case of Svelte it's actually doing four loops, so the performance will be worse than O(max(n, m)).

Yes, I believe the Solid guy Ryan has mentioned Svelte's reconciliation algorithm isn't ideal. Like I said, I'd like to see a nested list benchmark: Rust-dominator versus some of the libraries with more efficient array reconciliation. I'd like to see for myself. Do you think it would be useful?

It will still be a bit clunky to use, since Svelte doesn't have support for mutable methods, so you need to do weird things like foo = foo; in order to trigger the update. But that's how Array works right now, so at least it's not any worse than the status quo.

So, the way your library works is that it tracks all mutations to your MutableVec and then just translates those straightforwardly to the DOM? How does that work for large changes? If I randomize the list, or even just sort it in the opposite direction, how would that work and how efficiently?

Oh, no doubt there's many interesting trade-offs happening. That's also true with FRP libraries: each FRP library does things a bit differently, and there's many different designs for FRP.

Indeed, the world of reactive programming, even just restricted to FRP gets complicated with all the tradeoffs.

I would probably call it "reactive-lite" or something like that. It does indeed seem to be in its own category, not really fitting in with FRP or any of the other standard paradigms.

I don't think "lite" is fair. For one thing, this approach predates FRP. The paper A Survey on Reactive Programming by Bainomugisha et al uses the term "cousins of reactive programming". I think even that is misleading, since it's still reactive programming. I think "cell-based reactive programming" or "atom-based reactive programming" is more apt. Indeed, what you describe here:

So I came up with this idea of mutable Cells. Each Cell would have a value, and it could listen for changes to other Cells. So when you update one Cell, everything that depends on it will automatically update.

is pretty much it (see the relevant section of that paper for all the similar libraries). Also, you disparage your implementation but I think the simplicity of that cell-based approach has advantages.

(In addition, Rust does actually have those things as official goals, except for GUIs. Cross-platform GUIs do exist for Rust, they just aren't "official")

Do you have a source for the goal thing? I wasn't aware of that. I was researching the Rust GUI story recently and none of them seem even close to ready (putting aside the bindings to Qt, etc). That's a tangent but it's disappointing.

Why does that matter? As long as a language has best-in-class support for it, then that's what matters.

Yes, I don't disagree with that but I'm making only a social point. As far as I can tell, languages don't randomly become good for some task without intention. Haskell came out in 1990, and Dart came out in 2011 (Flutter in 2017). Haskell has basically no desktop GUI community. Of course, there's no reason that a renaissance of Haskell GUI programming can't happen tomorrow, but it seems unlikely to me.

Community usage is a useful heuristic because there's a strong correlation between usage and production readiness, it seems to me. Causation goes in both directions: people want to use libraries which are already production ready (like you with Koka), and people's use of a library makes it more production ready through contribution and battletestedness.

It's only a handful of crazies like me who are deeply invested in Wasm. That will change in the future, but for now the Rust Wasm group is quite small (only a couple dozen people at most, and only ~6 active members, including me).

It seems like our intuitions about the size of the community are out of sync. You probably know better. Although, maybe it's a matter of talk vs work. I see a lot of talk about Rust WASM but you presumably aren't seeing people actually work with it.

Pauan commented 4 years ago

Like I said, I'd like to see a nested list benchmark. Rust-dominator, and some of the libraries with more efficient array reconciliation. I'd like to see for myself. Do you think it would be useful?

I agree, it seems like something that would be nice to add to js-framework-benchmark.

So, the way your library works is that it tracks all mutations to your MutableVec and then just translates those straightforwardly to the DOM?

Yes. Here is the code that does that. It's quite straightforward: if it's VecDiff::Insert then insert a DOM node at that index, if it's VecDiff::Push then call appendChild, etc.

How does that work for large changes? If I randomize the list, or even just sort it in the opposite direction, how would that work and how efficiently?

It tracks the actual individual mutations. So it will depend on the algorithm.

I would recommend using something like cycle sort, which does the minimum number of changes to the array (at the expense of worse CPU performance and higher RAM usage).

With cycle sort, sorting will always have at most O(2n) VecDiff::Move operations, and will on average be much less than O(2n).

I don't think "lite" is fair.

"lite" just means light-weight or simple, which Svelte certainly is.

I think "cell-based reactive programming" or "atom-based reactive programming" is more apt.

Although it has first-class FRP values, you're instead encouraged to use plain old variables, which are not first-class values, and they are not cells. Instead they're essentially baked into the language.

Also, you disparage your implementation but I think the simplicity of that cell-based approach has advantages.

I don't think so. It caused a lot of real bugs and issues, which is why I fixed and improved it.

Do you have a source for the goal thing?

To be clear, Rust has all of those things as goals except GUIs (and JIT, which it obviously doesn't need).

However, Rust does currently have official working groups for those other areas.

So it wouldn't surprise me if sometime in the future Rust gets a GUI working group.

I see a lot of talk about Rust WASM but you presumably aren't seeing people actually work with it.

Yes, there's a lot of hype around Rust Wasm, but that hasn't (yet) translated into a large community. It's early days: the tooling is only just starting to get good, and it was a mess until recently.

I'm not joking when I say that Alex Crichton has singlehandedly done about 90% of the work on Rust + Wasm.

deklanw commented 4 years ago

I agree, it seems like something that would be nice to add to js-framework-benchmark.

I've thought about asking around for opinions on what benchmarks would be more illuminating to add. Unfortunately, I think the benchmark is too cemented. Perhaps a fork which tests rust-dominator vs. Svelte vs. a couple of vdom libraries with highly optimized list reconciliation, using nested arrays?

I would recommend using something like cycle sort, which does the minimum number of changes to the array (at the expense of worse CPU performance and higher RAM usage).

So, more trade-offs.

Although it has first-class FRP values, you're instead encouraged to use plain old variables, which are not first-class values, and they are not cells. Instead they're essentially baked into the language.

I wasn't talking about Svelte here, but about S.js, MobX, etc.

joonazan commented 4 years ago

I came here from a Google search on "rust frp" after looking at Svelte and realizing that it does diffing when the tutorial recommended adding indices to lists.

After reading this thread I'm sold on dominator! MutableVec seems to be the right thing for implementing an editable tree.

Just curious, would there be some way to do smart updates with FRP even if I decide to represent my tree as a list of nodes and a list of parent indices?

Pauan commented 4 years ago

@joonazan Theoretically that's possible, but I really don't know why you'd do that. That seems very clunky and will have very bad performance.

Instead it's idiomatic to just put your data into a MutableVec and then map it to transform it into DOM nodes:

html!("div", {
    .children_signal_vec(data.map(|x| {
        html!("div", {
            // use x to generate the data
        })
    }))
})

Then whenever your data changes, the DOM will automatically change. No need to put DOM nodes into lists or anything like that.

If you want a complex hierarchy of DOM nodes, that's easy: just nest MutableVecs:

// Assumes the dominator and futures-signals crates. In practice the items
// stored in a MutableVec are usually cheap to clone (often wrapped in Rc),
// since signal_vec_cloned() hands out clones of them.
use dominator::{html, Dom};
use futures_signals::signal::{Mutable, SignalExt};
use futures_signals::signal_vec::{MutableVec, SignalVecExt};

struct Foo {
    children: MutableVec<Bar>,
}

impl Foo {
    fn render(&self) -> Dom {
        html!("div", {
            .children_signal_vec(self.children.signal_vec_cloned().map(|bar| bar.render()))
        })
    }
}

struct Bar {
    children: MutableVec<Qux>,
}

impl Bar {
    fn render(&self) -> Dom {
        html!("div", {
            .children_signal_vec(self.children.signal_vec_cloned().map(|qux| qux.render()))
        })
    }
}

struct Qux {
    some_data: Mutable<i32>,
    some_other_data: Mutable<String>,
}

impl Qux {
    fn render(&self) -> Dom {
        html!("div", {
            .attribute_signal("value", self.some_data.signal().map(|x| x.to_string()))
            .text_signal(self.some_other_data.signal_cloned())
        })
    }
}

This is very fast, since every change results in O(1) DOM updates, no matter how deep in the hierarchy the change is. So even huge apps will update quickly.

Basically, the hierarchy of your Rust structs is the same as the hierarchy of the DOM. Think of a Rust struct as being like a very light-weight React/Angular component.

The structs just contain data, and then you use the render method to render them. And you can put other structs into a MutableVec, which you then render using children_signal_vec.
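
To make the O(1) point concrete, here's a hypothetical usage sketch (the function names are made up, and add_bar assumes Bar is Clone, or Rc-wrapped, as noted in the example above):

// Hypothetical usage: later mutations to the nested data patch only the
// affected DOM nodes, without re-rendering any parent.
fn bump(qux: &Qux) {
    // One attribute update on Qux's <div>.
    qux.some_data.set(qux.some_data.get() + 1);

    // One text-node update.
    qux.some_other_data.set("updated".to_string());
}

fn add_bar(foo: &Foo) {
    // One appendChild on Foo's <div> (assumes Bar: Clone).
    foo.children.lock_mut().push_cloned(Bar {
        children: MutableVec::new(),
    });
}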

joonazan commented 4 years ago

That's as I thought. But can you store Foos inside Foo?

Pauan commented 4 years ago

@joonazan Sure, why wouldn't you be able to?

joonazan commented 4 years ago

I thought you could, but your example didn't nest.
— You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/Pauan/rust-dominator/issues/16?email_source=notifications&email_token=AC66YFGT3RKDQAJCDH62ZTLQR2PZTA5CNFSM4IEVBF5KYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEC5PAGA#issuecomment-549122072, or unsubscribe https://github.com/notifications/unsubscribe-auth/AC66YFAODF7ECNKTXNUBZHTQR2PZTANCNFSM4IEVBF5A .