Closed cmeeren closed 3 years ago
Yep! I want this now.
Unless I'm mistaken, currently subModelSeq will loop through and update the sub-bindings for every item on every iteration of the update loop. This may have performance implications for large sequences.
Yep, you are correct. We noticed the performance of our application decrease significantly after I added the `SubModelSeq` binding from https://github.com/elmish/Elmish.WPF/issues/131#issuecomment-538398038 even though it only contained around 200 sub-models. The CPU analyzer in Visual Studio showed that the binding functions were taking a significant percentage of the time. Upgrading to a version with the `SubModelSeq` optimizations from #149 regained about half of the lost performance, while removing all sub-models or removing the entire `SubModelSeq` binding regained all of the lost performance.
Just like how we have `Binding.oneWaySeq` as the special case of `Binding.oneWaySeqLazy` with `equals = refEq` and `map = id` (as of #157), I think we should also use `refEq` and `id` as the defaults in these new lazy bindings for `SubModel[Seq]`.
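As a sketch of why `refEq` is a safe default (illustrative code only, not Elmish.WPF's actual definitions): with an immutable Elmish model, any sub-model that `update` did not touch survives as the same object, so an O(1) reference check is enough to detect "unchanged".

```fsharp
// Hedged sketch; `refEq`, `Model`, and `update` here are illustrative.

/// Reference equality: O(1) and never accidentally expensive,
/// which is why it is a safe default for `equals`.
let refEq (a: 'a) (b: 'a) : bool = obj.ReferenceEquals(a, b)

type Model = { Count: int; Items: string list }

// Copy-and-update leaves untouched fields structurally shared.
let update m = { m with Count = m.Count + 1 }

let before = { Count = 0; Items = ["a"; "b"] }
let after = update before

// refEq before.Items after.Items = true  (Items was shared, so skip its bindings)
// refEq before after                = false (the record itself changed)
```

The design point is that reference equality can produce false negatives (reporting "changed" for structurally equal values) but never costs more than a pointer comparison.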
Quoting https://github.com/elmish/Elmish.WPF/issues/157#issuecomment-543021860
I'm not sure whether this should bump the patch (implementation detail), minor (new feature), or major (breaking change) version. I'm leaning towards the latter, but am open to other opinions.
Again, I am indifferent to which version bump you go with. However, I recommend that we implement these lazy bindings using `refEq` and `id` as the defaults and release them in the same version bump, since it is the same kind of change. In particular, then you only have to do that version bump once.
I agree with everything. Thanks! 👍
I suggest we wait with merging #158 until this is ready, too. Alternatively, if it makes the job easier for you, we can create a dedicated branch in this repo for these changes, and merge toward that instead, and when it's ready, merge that into master.
> I suggest we wait with merging #158 until this is ready, too. Alternatively, if it makes the job easier for you, we can create a dedicated branch in this repo for these changes, and merge toward that instead, and when it's ready, merge that into master.
Yes, I like the idea of having a dedicated branch. I used my new write access to create a branch for that purpose.
I think I have come up with a good way to implement this.
Originally I thought that we would need new `subModelLazy`, `subModelOptLazy`, `subModelWinLazy`, and `subModelSeqLazy`, all with `map` and `equals` parameters, like `oneWayLazy`. For `oneWay`, it makes sense to be able to memoize potentially expensive computations, but thinking about it, I can't really come up with any good use case for why one would need this for the sub-model bindings. Hence I think that `map` isn't really needed. (I also think the sub-model overloads are complicated enough type-wise as it is, and `map` would add yet another type/transformation into the mix. The most complex overloads actually already have a similar transformation in `toBindingModel`.)

Therefore, in my mind, the simplest solution is to extend all the existing sub-model bindings with an additional, optional `equals` parameter (added as the final parameter for signature-wise backwards compatibility):
- `subModel` and `subModelOpt`: `equals` has signature `'subModel -> 'subModel -> bool` and defaults to `refEq`. Internally it's wrapped in a function that also handles the `option` wrapper.
- `subModelWin`: the same, but it's wrapped in a function that handles the `WindowState` wrapper.
- `subModelSeq`: `equals` has signature `#seq<'subModel> -> #seq<'subModel> -> bool` and defaults to `refEq`. (I'm not sure if the type can be annotated with `#seq<'subModel>` since this is actually shorthand for something like `'a` with the type constraint `when 'a :> seq<'subModel>`, so we may need to use the explicit notation, which I hope we can avoid since it's harder to understand in IntelliSense. We'll see what works.)

What do you think?
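For illustration, here are the two notations under discussion as standalone functions (not Elmish.WPF's API). One reason the explicit form can matter: each `#` introduces its own fresh type variable, while a named type parameter forces both arguments to the same concrete type.

```fsharp
// Hedged sketch; `equalsShorthand` and `equalsExplicit` are hypothetical names.

// Shorthand flexible types. Note each `#` gets its own type variable,
// so `a` and `b` may even be different concrete seq types here.
let equalsShorthand (a: #seq<'T>) (b: #seq<'T>) : bool =
    obj.ReferenceEquals(a, b)

// Explicit notation the shorthand expands to; using one 'Coll for both
// parameters forces them to the same concrete type.
let equalsExplicit<'Coll, 'T when 'Coll :> seq<'T>> (a: 'Coll) (b: 'Coll) : bool =
    obj.ReferenceEquals(a, b)
```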
I agree with the `subModel` and `subModelWin` cases.

For `subModelSeq`, I would like to see `equals` and `map` parameters.

I had forgotten about the present issue when I created #157. Even if I had remembered this issue, I may still have created that one since `oneWaySeq` is simpler than `subModelSeq`.
My application at work has a `subModelSeq` binding in which the `getSubModels` parameter takes the form `fun m -> m.Foo |> Seq.filter predicate`. Without a `map` parameter, the most efficient implementation of `equals` would be a linear-time computation that calls `refEq` on sequential pairs of elements.
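Such a pairwise `equals` could be sketched as follows. `seqRefEq` is a hypothetical name, and the check is only meaningful for reference-typed sub-models (value types would be boxed and never reference-equal):

```fsharp
// Hedged sketch: element-wise reference equality over two sequences,
// the linear-time `equals` described above.
let seqRefEq (xs: seq<'a>) (ys: seq<'a>) : bool =
    // Same length and every aligned pair is reference-equal.
    // (Seq.forall2 alone would ignore trailing extras in the longer sequence.)
    Seq.length xs = Seq.length ys
    && Seq.forall2 (fun x y -> obj.ReferenceEquals(x, y)) xs ys
```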
To be fair, adding an optional `equals` parameter for `subModel` will be a huge improvement and mostly mitigate the lack of `map` for `subModelSeq`. In its current form, my application's `subModelSeq` binding is fully inspected for changes (a quadratic-time computation, c.f. #137) every time Elmish calls `update`. The performance goes from perfect when there are 0 sub-models to essentially unusable when there are ~300 sub-models. After we add `equals` for `subModel` and I use it in my application, then the performance will once again be perfect.
Typing out the previous paragraph made me realize that a `subModel` binding can be a complicated way to express the parameter `map`. After we add `equals` for `subModel`, then it would be possible to use this hack to implement lazy behavior for `subModelSeq` or any other type of binding.
runs away to code ... returns from coding
As a proof of concept, https://github.com/elmish/Elmish.WPF/blob/2f7fc9f155b8af9dc2c735f8b63145f9f441d0d5/src/Samples/SingleCounter/Program.fs#L29 becomes

"DataContext" |> Binding.subModel((fun m -> m.Count), id, (fun () -> [
  "CounterValue" |> Binding.oneWay snd
]))

and

<StackPanel DataContext="{Binding DataContext}">
  <TextBlock Text="{Binding CounterValue, StringFormat='Counter value: {0}'}" Width="110" Margin="0,5,10,5" />
</StackPanel>
How about we make this hack a first-class binding? In this world, there would be no `oneWayLazy`...just `oneWay`. Replacing a `oneWayLazy` binding of the form

"name" |> Binding.oneWayLazy(get, equals, map)

would look something like

"name" |> Binding.lazy(get, equals) |> Binding.oneWay(map)
This makes me think about the functor/applicative/monadic-like behaviors of lazy, seq, and opt. Maybe we can express these "effects" better, making it easier to create the desired binding by composing bindings: binding combinators.
Ah! I want to think more about this now, but I have to go :'(
> [...] How about we make this hack a first-class binding? [...]
Fantastic! I tried a long time ago to come up with a single lazy abstraction, but couldn't do it. If what you suggest will work, and no XAML changes are needed (i.e., no binding to a separate `DataContext` on a parent element), then I'm thrilled by this and it should definitely be in Elmish.WPF.
If the binding API can be designed to be more combinatorial in this way, I think it will end up more ergonomic.
Experiment away!
> How about we make this hack a first-class binding?

> I tried a long time ago to come up with a single lazy abstraction, but couldn't do it.
I think #151 is blocking this work.
The "conceptual call stack" for my hack above using `subModel` looks like

- `ViewModel<,>.UpdateModel` for `subModel`
- `updateValue` for the `subModel` `VmBinding<,>`
- `ViewModel<,>.UpdateModel` for `oneWay`
- `updateValue` for the `oneWay` `VmBinding<,>`
> ...and no XAML changes are needed (i.e., no binding to a separate `DataContext` on a parent element)...
The reason my hack needs a separate `DataContext` binding on a parent element is because that conceptual call stack includes two calls to `ViewModel<,>.UpdateModel`; there are two different view models involved. After #151 is complete, there will be a new record type called (say) `VmBindingStrategy<...>` with field `UpdateValue`, and that conceptual call stack would become something like

- `ViewModel<,>.UpdateModel` for `Lazy<subModel>`
- `VmBindingStrategy<...>.UpdateValue` for `Lazy<subModel>`
- `VmBindingStrategy<...>.UpdateValue` for `subModel`
> I think #151 is blocking this work.
I don't think that any more. I think I can get the needed recursion by calling a function that takes the `VmBinding<,>`. For example, in the case of `UpdateModel`, the recursion can be added just to `updateValue`.
I expect to make progress on this later today.
I was thinking that the lazy binding would be created in part by calling a method `Binding.lazy`, but that somewhat conflicts with `lazy` being a keyword in F#.

Would anyone like to suggest a different or better name?
Maybe `optimized`?
IMHO `optimized` is too vague.

- `asLazy`
- `lzy` (preferably not; won't show up when typing `la`)
- `lazy'` (preferably not)
- ``lazy`` (definitely not)

Further suggestions welcome.
I also like `asLazy` best among your suggestions. However, after writing https://github.com/elmish/Elmish.WPF/issues/178#issue-517243684, I now better understand why I initially suggested `optimized`.
We have two types of "lazy": `OneWayLazy` and `SubModelSelectedItem`. I think the behavior of the second one is more like (but not exactly the same as) the .NET/C# type `Lazy<>` and the F# type/keyword `lazy`. (The behavior of those types is exactly the same as the call-by-need evaluation strategy.)
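For reference, F#'s `lazy` (backed by `Lazy<>`) exhibits exactly this call-by-need behavior; a plain F# illustration, unrelated to Elmish.WPF's internals:

```fsharp
// Call-by-need: the thunk runs at most once, on first access to .Value,
// and the result is cached for all later accesses.
let mutable evaluations = 0
let v = lazy (evaluations <- evaluations + 1; 21 * 2)

let first = v.Value   // forces evaluation; evaluations becomes 1
let second = v.Value  // cached; evaluations stays 1
// first = 42, second = 42, evaluations = 1
```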
I feel like we have ample room for naming improvements here. I will keep thinking about this.
> I feel like we have ample room for naming improvements here. I will keep thinking about this.
Do that, though the term `lazy` is well-known in the MVU world, so I'd prefer a variant of that. (I'm open to other suggestions, though.)
Oh, I just realized something that I had wrong in my head.

`OneWayLazy` is "lazy" in two ways:

1. `Equals` (and `Map`) in `updateValue`:
https://github.com/elmish/Elmish.WPF/blob/60fdb8922a6442c3ac4d600c857537b8e4bc819b/src/Elmish.WPF/ViewModel.fs#L17-L22
https://github.com/elmish/Elmish.WPF/blob/60fdb8922a6442c3ac4d600c857537b8e4bc819b/src/Elmish.WPF/ViewModel.fs#L418-L422
2. `Lazy<>`, which is created in `initializeBinding`
https://github.com/elmish/Elmish.WPF/blob/60fdb8922a6442c3ac4d600c857537b8e4bc819b/src/Elmish.WPF/ViewModel.fs#L218-L225
and `updateValue`
https://github.com/elmish/Elmish.WPF/blob/60fdb8922a6442c3ac4d600c857537b8e4bc819b/src/Elmish.WPF/ViewModel.fs#L418-L422
and used in `tryGetMember`:
https://github.com/elmish/Elmish.WPF/blob/60fdb8922a6442c3ac4d600c857537b8e4bc819b/src/Elmish.WPF/ViewModel.fs#L663-L664

I also think I am wrong as to why "Lazy" appears in the name of `OneWayLazy`. Comparing them side by side...
https://github.com/elmish/Elmish.WPF/blob/60fdb8922a6442c3ac4d600c857537b8e4bc819b/src/Elmish.WPF/ViewModel.fs#L13-L22
...I thought that `OneWayLazy` had "Lazy" in its name because of the addition of `Equals` and `Map`. However, I now have the feeling that this is because of the use of `Lazy<>`.

@cmeeren, can you clarify this for me? Is it for only one reason or the other that `OneWayLazy` is called "Lazy"?
It's called "lazy" because of the corresponding optimization in Elm (and in Elmish). Note that there are of course some differences due to Elmish.WPF using static views, but the primary goal and the end result is similar: Skip doing UI work if the inputs haven't changed.
Thanks for those links. They are very helpful :)
> ...the primary goal and the end result is similar: Skip doing UI work if the inputs haven't changed.
But what is UI work? Here are three possibilities phrased in terms of Elm.

I associate the phrase "UI work" with the first possibility. Elm avoids DOM computation by doing more virtual DOM computation. Elmish.WPF also strives for the related goal of minimizing change notifications. In the case of `SubModelSelectedItem`, it used to be that `updateValue` always caused a change notification (because it always returned `true`). PR #177 fixed that. I view this as a more accurate diffing algorithm.
The second possibility is mostly about the virtual DOM diffing algorithm. Being able to compute the diff more efficiently is the purpose of Elm's `Html.Lazy`. For `OneWayLazy`, this corresponds to the `Equals` and `Map` functions. This is what I originally thought. I view this as a more efficient diffing algorithm.

I don't know if Elm has any mechanism for making the third possibility more efficient. In Elmish.WPF, for the case of `OneWayLazy`, the use of `Lazy<>` is to help avoid unnecessary execution of the user-provided functions (`Get` and `Map`). However, I still don't know under what circumstances this situation would arise.
More specifically: Is it possible for WPF to be given a binding but not read the bound value?
This is related to https://github.com/elmish/Elmish.WPF/pull/176#issuecomment-549292167

> ...if the UI doesn't need the selected item for some reason, it's not necessary to calculate it.

...which was also quoted in https://github.com/elmish/Elmish.WPF/issues/178#issue-517243684.
I view the third possibility as lacking an analog in Elm or Elmish.React, and maybe not even necessary in Elmish.WPF. If WPF always requests a bound value, then removing the use of `Lazy<>` would yield a slight improvement in performance and memory use. It would also simplify the code within Elmish.WPF (only internal code...there would be no change to the public API), but the current use of `Lazy<>` is not a hindrance to the internal code either.
Quoting Elm's HTML.lazy:

> Note: When are two values “the same” though? To optimize for performance, we use JavaScript's `===` operator behind the scenes:
> - Structural equality is used for `Int`, `Float`, `String`, `Char`, and `Bool`.
> - Reference equality is used for records, lists, custom types, dictionaries, etc.
>
> Structural equality means that `4` is the same as `4` no matter how you produced those values. Reference equality means the actual pointer in memory has to be the same. Using reference equality is always cheap `O(1)`, even when the data structure has thousands or millions of entries. So this is mostly about making sure that using `lazy` will never slow your code down a bunch by accident. All the checks are super cheap! ... ...in the end it only matters how much you successfully use `lazy`.
Then why not always do such reference equality checks? Why make the user put `lazy` everywhere in their code? Is it possible to use `lazy` in an unsuccessful way?
This is the change discussed in #157 for `OneWaySeq[Lazy]` (to have `equals = refEq` and `map = id` by default) and merged to an internal branch in PR #162.

I think it is a good idea to use reference equality checks by default. They cost almost nothing and could save loads of computation.
To me, successfully using `lazy` means overriding the default behavior with some other kind of efficient equality check when reference equality would always return `false` (or, God forbid, reference equality would always return `true` because you are using mutable types). This is somewhat like the difference between `lazyView` and `lazyViewWith` in Elmish.React.
> - User computation used to create the virtual DOM
>
> [...]
>
> I don't know if Elm has any mechanism for making the third possibility more efficient.
I'm not sure exactly what you mean by point 3, but if I understand you correctly, that would just be some variant of plain old memoization implemented and controlled by the user.
> In Elmish.WPF, for the case of `OneWayLazy`, the use of `Lazy<>` is to help avoid unnecessary execution of the user-provided functions (`Get` and `Map`). However, I still don't know under what circumstances this situation would arise.
>
> More specifically: Is it possible for WPF to be given a binding but not read the bound value?
You seem to assume that each binding is used exactly once. That's not the case. You can have several XAML bindings to a single VM property (i.e., Elmish.WPF binding). Using `lazy` ensures the value is only computed once.
> Quoting Elm's HTML.lazy:
> [snip rest of comment]
I'm unsure if you want an answer (and if so, what the question is), or if you're simply writing down your reflections. Let me know if you have concrete suggestions for a change to a binding.
> - User computation used to create the virtual DOM

> I'm not sure exactly what you mean by point 3, but if I understand you correctly, that would just be some variant of plain old memoization implemented and controlled by the user.
I agree that point three by itself is rather unclear. This is partially because I tried to phrase things in terms of Elm, but I don't know what is possible in Elm for this case as it exists in my mind.

In terms of Elmish.WPF, I am specifically referring to the use of `Lazy<>` in the bindings. I don't think it has anything to do with the user and what is under their control. For example, if the user wants to (must?) use a `SubModelSelectedItem` binding, then they have no choice but to also use `Lazy<>` (now that PR #179 is complete). They can't directly see the use of `Lazy<>`, but their program is using it.
> You seem to assume that each binding is used exactly once. That's not the case. You can have several XAML bindings to a single VM property (i.e., Elmish.WPF binding). Using `lazy` ensures the value is only computed once.
Sure, but computing things more than once is not the issue. Both...

let a = f ()
printf "%A" a
printf "%A" a

...and...

let a = lazy (f ())
printf "%A" a.Value
printf "%A" a.Value

...execute `f` exactly once. The only reason that I can see for why Elmish.WPF is using `Lazy<>` is because maybe `Lazy<>.Value` will never be called by WPF for some binding over the course of its lifetime. But I can't think of any reason why this would happen.
Of course WPF won't request the bound value for a binding it doesn't have, so any binding defined in Elmish.WPF that doesn't appear in any XAML sort of fits this description, but this is a pathological case. I think this case is better described as a bug. Either the user should use this binding in some XAML or the binding should be deleted.
Another consequence of using `Lazy<>` is that the execution of the memoized computation happens later in time than it would if `Lazy<>` were not being used. However, this difference seems pointless to me. In particular, both cases execute the computation on the UI thread.

Oh, this gives me an idea though. It used to be that the Elmish dispatch loop (and especially `update`) was (or, at least, could be) executed on a background thread. Then Elmish.WPF users noticed the race condition when typing in a text box. The fix was to put the whole dispatch loop on the UI thread. Certainly that was sufficient. It probably wasn't completely necessary though. Some parts of the dispatch loop (and especially parts of `update`) can probably execute on a background thread. Probably not worth it though. I will just keep this idea in the back of my mind (and here!).
> I'm unsure if you want an answer (and if so, what the question is), or if you're simply writing down your reflections.
It is an honest question, but it is also fine if it is taken as being rhetorical.
> Let me know if you have concrete suggestions for a change to a binding.
Will do.
Actually, don't bother replying my previous comment. I (think I) have an idea that will make it moot. I will create a new issue soon.
I have thought some more about this. It seems we are doing three optimizations:

- Memoization
- Caching
- Laziness: During update, don't eagerly compute the value; only compute it when fetched
I think memoization and caching are the most important ones, since laziness is only useful if the UI doesn't fetch the value, which I think should happen very rarely, if at all (but I may be wrong).
In any case, instead of calling it a variant of the reserved keyword `lazy`, we can call it `memoized` or `cached` (I think `memoized` is the better of the two, though `cached` is shorter).

To be honest though, I would still prefer `lazy` since it's a known Elm(ish) concept. I may actually like `lazy'` best of all suggestions so far.

Elmish.React has something similar and called it `lazyView`. We might call it `lazyBinding`. Though since the usage would likely be `Binding.lazyBinding`, I don't particularly like that.
> I (think I) have an idea that will make it moot. I will create a new issue soon.
For clarity, that issue was #181, and PR #183 resolved it.
> - Laziness: During update [or initialization], don't eagerly compute the value; only compute it when fetched

I made a minor addition there to your quote.
I would say that memoization is a specific type of caching. In our case, I think the `Cached` binding case memoizes `tryGetMember` using a cache size of one (so any cache invalidation here in `trySetMember` or elsewhere in `updateValue` empties the whole cache).
In computing, the meaning of the word lazy is much less precise. One precise thing is that it is the opposite of eager. The opening paragraph of the Non-strict evaluation section of Wikipedia's Evaluation strategy article essentially says that lazy and non-strict are synonyms. Non-strict evaluation is a very broad concept.
Elm's Html.Lazy is a form of lazy (or non-strict) computation via memoization with a cache of size one:

> One of the cool things about Elm is the “same input, same output” guarantee for functions. So whenever we run into two “lazy” nodes while diffing virtual nodes, we ask is the function the same? Are the arguments the same? If they are all the same, we know the resulting virtual nodes are the same as well! So we can skip building the virtual nodes entirely! If any of them have changed, we can build the virtual nodes and do a normal diff.
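That cache-of-one memoization can be sketched in a few lines of F#. `memoizeLast` is a hypothetical helper, not Elmish.WPF code; it keys the cache on reference equality, mirroring Elm's "are the arguments the same?" check:

```fsharp
// Hedged sketch: memoization with a cache of size one, keyed by reference
// equality, similar in spirit to Elm's Html.Lazy.
let memoizeLast (f: 'a -> 'b) =
    // A ref cell so the closure can update the cached (input, output) pair.
    let cache = ref None
    fun (x: 'a) ->
        match cache.Value with
        | Some (lastIn, lastOut) when obj.ReferenceEquals(lastIn, x) ->
            lastOut  // same input as last time: skip the computation entirely
        | _ ->
            let result = f x
            cache.Value <- Some (x, result)
            result
```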
> I think memoization and caching are the most important ones, since laziness is only useful if the UI doesn't fetch the value, which I think should happen very rarely, if at all (but I may be wrong).
I do agree that not eagerly computing values returned from `tryGetMember` is the least important thing we are considering here.
> ...instead of calling it a variant of the reserved keyword `lazy`, we can call it `memoized` or `cached` (I think `memoized` is the better of the two, though `cached` is shorter).
About naming, I think

- `memoized` is better than `cached` because it is a more precise description of behavior,
- `cached` is better than `memoized` because it is shorter,
- `memoed` and `cached` are equally good because they have the same length (especially in code, which typically uses a monospaced font),
- `memoed` is worse than `cached` because it is an abbreviation,
- `lazy` is better than both `memoized` and `cached` because it is the name that Elm uses for this concept, and
- `lazy` is worse than `memoized` and `cached` because it is a reserved keyword.

And I think we are still looking for the right name for us.
I wish I had something a little less superficial than naming to contribute, but my first choice would be `asLazy`. `memoized` is my second choice. Personally, I don't mind that it's a little longer than `cached`.
I think I need this issue to be implemented. I'm using `subModelSeq` with a `TreeView` that has over 6,000 nodes. When the user checks a checkbox in any of the nodes, the update triggers the bindings for all the nodes, so the checkbox takes roughly 16 seconds to react to the user's click. See below the slow functions called by the binding code (based on the SubModelSeq sample). The `preorderFlatten` function might be the one taking most of the time.
let dataOfChildren n =
  n.Fields |> List.map (fun nn -> nn.Data)

let rec preorderFlatten n =
  n :: List.collect preorderFlatten n.Fields

/// Returns all immediate child counters of the specified parent counter ID.
let childrenFieldsOf pid m =
  m.DummyRoot
  |> preorderFlatten
  |> List.find (fun n -> n.Data.Id = pid)
  |> dataOfChildren
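Flattening the entire tree inside `childrenFieldsOf` is what makes this quadratic: the binding calls it once per node, and each call re-walks the whole tree via `preorderFlatten`. One way around that, sketched here with a simplified stand-in for the model (`int` IDs instead of `FieldId`, and a hypothetical `buildNodesById` helper), is to flatten once and index the nodes by ID:

```fsharp
// Hedged sketch: build the node index once per model change instead of
// re-flattening the tree on every lookup.
type FieldData = { Id: int; Name: string }           // simplified stand-in
type Field = { Data: FieldData; Fields: Field list }

let rec preorderFlatten n =
    n :: List.collect preorderFlatten n.Fields

/// O(n) once per model change, instead of O(n) per lookup.
let buildNodesById root =
    preorderFlatten root
    |> List.map (fun n -> n.Data.Id, n)
    |> dict

/// Near-constant-time child lookup using the precomputed index.
let childrenFieldsOf (nodesById: System.Collections.Generic.IDictionary<int, Field>) pid =
    nodesById.[pid].Fields |> List.map (fun n -> n.Data)
```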
Can you share a branch with this slow behavior?
What version of Elmish.WPF are you using?
Elmish.WPF version 3.5.5.0 for net461.
Sorry, I cannot share our project, which uses sensitive data. But as soon as I have time, I will create an obfuscated sample. As you can see below, our model is pretty simple, and the only data the user can change is the boolean values.
module Domain

open System

type FieldId = FieldId of Guid

type MessageType = {
  ID: int
  Name: string
}

type FieldData = { // Leaf data
  Id: FieldId
  Name: string
  Type: string
  IsGml: bool
  IsCml: bool
  IsCmlChangeField: bool
  IsCmlEntity: bool
}

type Field = {
  Data: FieldData
  Fields: Field list
}
with
  member this.IsEmpty = this.Data.Type = ""

type Model = {
  MsgType: MessageType
  DummyRoot: Field
}
with
  member this.ParentStruct = this.DummyRoot.Fields.Head
  member this.IsEmpty = this.ParentStruct.IsEmpty
> Elmish.WPF version 3.5.5.0 for net461.
Oh! Then you definitely have to try version 3.5.6 (the latest release). As it says in the release notes,

> The amount of time used to update `OneWaySeq` and `SubModelSeq` bindings has been significantly decreased. This includes all cases of a `SubModelSeq` binding and all cases of a `OneWaySeq` binding for which `equals` returns `false`.
Your use case might still be a good example to motivate the present issue. Let's reconsider it after you are using the newest version.
> Sorry, I cannot share our project, which uses sensitive data.
No worries. I completely understand that, and also the difficulty of producing an MWE from it.
I upgraded to 3.5.6, but there seems to be no improvement.
Interesting.
Instead of sharing your actual project, can you share a branch with this slow behavior that only contains a minimal working example?
> I'm using `subModelSeq` with a `TreeView` that has over 6,000 nodes. When the user checks a checkbox in any of the nodes, then the update triggers the bindings for all the nodes, so the checkbox takes roughly 16 seconds to react to the user's click. See below the slow functions called by the binding code (based on the subModelSeq sample).
Oh, I did not fully appreciate this part of your original comment. Are you saying that the `SubModelSeq` sample with 6000 top-level nodes has the same slow behavior? I can look into that.
I'm using Bindings.SubModelSeq with code based on the SubModelSeq sample. Our tree has over 6,000 nodes total (not top level) in the worst case. I'll get you a minimal working example as soon as I can.
> See below the slow functions called by the binding code (based on the subModelSeq sample). The `preorderFlatten` function might be the one taking most of the time.
Oh, yes. Another point from your original comment that I didn't fully appreciate. I agree. This implementation is quadratic. To be clear, the main inefficiency is with the sample code, which I presume you copied into your actual project. I don't think this is a good example to motivate the present issue.
I created a MWE in this branch. It contains 1000 top-level nodes and takes about four seconds to increment a counter.
I will see if I can find a quick fix.
Hum...I pushed a commit that drops the time from four seconds to one when there are 1000 counters. But it still takes longer than I am willing to wait for 6000. I will continue to investigate.
I pushed a commit where I did a minimal amount of work implementing lazy behavior for `SubModelSeq` to test the performance of the modified `SubModelSeq` sample. It now updates instantly :)
@ScottHutchinson, can you try this change with your private project and let us know what the performance is like now?
FYI: I just tested the lazy optimization in Elmish.WPF (the one I implemented second) without the caching optimization in the `SubModelSeq` sample (the one I implemented first). The update time was still instant, which means users don't have to implement that caching behavior in their application.
> I created a MWE in this branch.
The changes are in this branch, which is in my fork of Elmish.WPF. Sorry for the confusion. You can add my repo as another remote.
I'm looking at your changes to the subModelSeq sample. Is the new `createParentsByChildId` function needed for the performance improvement? I don't understand it yet, so I don't want to port it to my app unless it's useful. EDIT: I'm thinking that all I need is the new `itemEquals` parameter for the `subModelSeq` binding. I'm going to try to test that in my app today. Thanks
Sorry for the confusion (again!).
I implemented two optimizations, which are independent of each other. I have now pushed them to separate branches to help emphasize that.

1. The `SubModelSeq` sample in the branch `testing/large_TreeView_slow_optimize_user_code`.
2. The `SubModelSeq` sample in the branch `testing/large_TreeView_slow_optimize_Elmish.WPF`, BUT you can ALSO test this optimization on your private application without any changes to your application using the same branch.

So in an attempt to be as specific as possible, can you test your private application on the branch `testing/large_TreeView_slow_optimize_Elmish.WPF`?
No worries. It's just me being slow, and my git fu is not so good.
OK. I think optimization 1 is changing the `childrenCountersOf` function to not call the `preorderFlatten` function, so I will try that first in my app. EDIT: The problem is my model does not have the `NodesById` field. So maybe I'm better off testing optimization 2 first, and if that fixes it, then I don't need to bother changing my model to test optimization 1.

Then I will figure out how to point my app to your branch to test optimization 2, which I think requires the new itemEquals parameter for the subModelSeq binding.
> OK. I think optimization 1 is changing the `childrenCountersOf` function to not call the `preorderFlatten` function, so I will try that first in my app. EDIT: The problem is my model does not have the `NodesById` field.
I recommended trying optimization 2 first because my testing showed it gave a significantly bigger improvement.
> ...you can ALSO test this optimization on your private application without any changes to your application using the same branch.
> I will figure out how to point my app to your branch to test optimization 2, which I think requires the new `itemEquals` parameter for the `subModelSeq` binding.
Oh, yes. You are correct. You need to supply that argument. I suggest using `refEq` as I did in the `SubModelSeq` sample (unless you are representing change via mutation, in which case things are much more complicated).
I referenced the Elmish.WPF package built from your branch in my app, and added the two `(fun (_, a) (_, b) -> refEq a b)` arguments just like in the sample. But it did not work. The only checkbox that works reliably (i.e., the GUI changes the checked state visibly when clicked) is the root node. All the other checkboxes in the other nodes do not react.
Huh :(
I suppose I could have introduced a bug to cause that, but I don't know what it would be off the top of my head.
I could increase my priority of this present GitHub issue and then implement it (bug free!).
In the meantime, we can make progress on your specific issue when you are able to create a MWE for us to investigate.
My "MWE" is at https://github.com/ScottHutchinson/Elmish.WPF-1/tree/ShowDialog_subModelSeq

This branch includes a new sample project `ShowDialog.subModelSeq`, which will show a tree with over 6,000 nodes and lots of checkboxes that work very slowly.
I started to add code to measure the elapsed time for a checkbox binding to complete, but I couldn't figure out how to make it work.
By the way, the window also loads slowly. But I can live with that, since this number of nodes is the worst case, while most of the cases have fewer than 1,000 nodes and load much faster. I'm much more concerned about the responsiveness of the checkboxes.
I thought I did the rebase correctly, but after I pushed the new branch, GitHub says "This branch is 1 commit ahead, 91 commits behind elmish:master." So I probably did something wrong.
I locally rebased your branch on `master` without problem. When I further rebased it on my branch `testing/large_TreeView_slow_optimize_Elmish.WPF` and passed in values for `itemEquals` based on `refEq`, I got the same behavior that you described...the UI is very responsive, but checkboxes are not changing their state when checked.
I will continue to investigate.
I figured out the problem.
First, you definitely need the lazy behavior for which this GitHub issue exists. A special case of a tree is a list. If all your nodes were in a list (i.e., if every "real" node in your tree had the "dummy node" as its parent), then you would avoid the buggy behavior related to trees that I discuss below but still need the optimization to significantly decrease the runtime.
Second, my sloppy commit paired with a value for `itemEquals` like `fun (_, a) (_, b) -> refEq a b` seems to correctly implement this lazy behavior for explicit trees but not for implicit trees. By "implicit tree", I mean a tree in which each node does not contain direct references to its children. When viewing a list as a tree, each node has no children, so every list is also an explicit tree.
Passing in `fun (_, a) (_, b) -> refEq a b` for `itemEquals` ignores changes in children since the tree is implicit. So for your MWE, you can instantly change the checkboxes of the top-level node(s), but to change a checkbox of any other node, you also have to make some change in every ancestor of that node (i.e., in its parent, and its grandparent, etc.).
The `SubModelSeq` sample stores an explicit tree in its model but then converts this explicit tree into an implicit tree in the binding, and then gives the resulting implicit tree to Elmish.WPF. One of my first contributions to Elmish.WPF was changing that sample's model to store an explicit tree. Now I am thinking that another good change is to also use the explicit tree in Elmish.WPF.

I will create a new issue for that part.
I created an issue about how to handle the case of trees. It is issue #236.
I thought more about the naming. For now, I am going to use `lazyUpdate`. This is helping me remember that this doesn't save any computation when getting the value (which could be called `lazyGet`) but instead adds a short-circuit predicate to the update logic.
> Unless I'm mistaken, currently `subModelSeq` will loop through and update the sub-bindings for every item on every iteration of the update loop. This may have performance implications for large sequences. If the input sequence has not changed, the bindings should ideally be skipped altogether. This calls for a `subModelSeqLazy` binding.
>
> The same might be said for `subModel`, though there's only ever one model there, so it's not as critical. But given that the model might have further sub-models (or just a lot of bindings), it should be possible to skip `subModel`, too, if the input hasn't changed.
>
> Nobody has requested this, and I don't have any immediate plans on implementing it, but I'm leaving this issue here in case anyone else wants to look at it, or just as a future reminder for myself.