Open TiemenSch opened 9 months ago
I ran some benchmarks on (de)serializing the data object itself, using Criterion with wasmer as the runtime. For my largest dataset, serialization (to either bincode or a JsValue) takes only a millisecond or two: bincode wins at 0.9 ms versus 1.2 ms, but neither should be a (huge) problem.
That would mean the bottleneck is further up the tree, where the data is handed to the web_sys storage API and eventually ends up in browser-land.
I'm not well versed in that side of things, but do you think spinning up a WebWorker to do the actual writing to storage would help? Since WebWorkers can't access localStorage, such a solution would have to rely on IndexedDB as well, making it quite a hassle to get right.
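For sanity-checking benchmark numbers like the ones above outside Criterion, a plain `Instant`-based loop is often enough. This is only a sketch: `make_state` and `serialize` are hypothetical stand-ins for the real data object and the real `bincode::serialize` call.

```rust
use std::time::Instant;

// Hypothetical stand-in for the real application state; in practice this
// would be the actual data object stored in yewdux.
fn make_state() -> Vec<(u64, String)> {
    (0u64..10_000).map(|i| (i, format!("row-{i}"))).collect()
}

// Stand-in "serialization": a debug-format into a byte buffer, where the
// real code would call e.g. bincode::serialize(&state).
fn serialize(state: &[(u64, String)]) -> Vec<u8> {
    format!("{state:?}").into_bytes()
}

fn main() {
    let state = make_state();
    let iters: u32 = 100;
    let start = Instant::now();
    let mut total_bytes = 0usize;
    for _ in 0..iters {
        total_bytes += serialize(&state).len();
    }
    let elapsed = start.elapsed();
    println!(
        "avg {:?} per serialization, {} bytes each",
        elapsed / iters,
        total_bytes / iters as usize
    );
}
```

If the per-call time here stays in the low milliseconds while the app still stutters, that supports the hypothesis that the cost sits in the storage hand-off rather than in serialization itself.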
Another potential slowdown could be cloning. For methods like reduce_mut, state is cloned at most once per call. This can be mitigated with Rc, or Mrc for interior mutability (at the cost of deep change detection).
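A minimal sketch of why `Rc` helps here: cloning an `Rc` only bumps a reference count instead of deep-copying the data. The `AppState` struct below is a hypothetical stand-in for a large state object.

```rust
use std::rc::Rc;

// Hypothetical stand-in for a large application state.
#[derive(Clone)]
struct AppState {
    rows: Vec<String>,
}

fn main() {
    let big = AppState { rows: vec!["x".repeat(100); 10_000] };

    // Deep clone: copies every row (expensive for large states).
    let deep = big.clone();

    // Rc clone: copies only a pointer and bumps the refcount (cheap).
    let shared = Rc::new(big);
    let handle = Rc::clone(&shared);

    assert_eq!(deep.rows.len(), handle.rows.len());
    assert_eq!(Rc::strong_count(&shared), 2);
    // Trade-off noted above: with shared ownership, change detection can no
    // longer rely on comparing two independent deep copies.
}
```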
On some occasions, the storage listener has been slowing down my Yew application due to the sheer size of the serialization effort involved before writing to local/session storage.
Do you have any ideas on how to alleviate this, or at least keep it from blocking the main application thread?
I was hoping to use some form of async + spawn_local or a WebWorker setup to run the serialization effort (currently bincode) in parallel, but so far I haven't been able to get it going to any good effect. While it would be awesome to lift this burden off of the main thread, the next hurdle would be ensuring that data races are prevented, so that the "storaged" state eventually ends up as the latest one.

As a last resort I could turn away from using storage listeners altogether and only use a manual save button somewhere, so that regular interaction with the app remains smooth.
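One way to get the "storaged state eventually ends up as the latest one" guarantee without locking is a monotonically increasing version number: every state change bumps the counter, and an async write is only committed if no newer snapshot was taken while it was serializing. A minimal single-threaded sketch, with all names hypothetical (in Yew the `try_commit` check would run inside the spawn_local callback just before touching storage):

```rust
use std::cell::RefCell;

// Hypothetical latest-wins store: each snapshot carries the version it was
// taken at; a finished write is discarded if a newer snapshot exists.
struct VersionedStorage {
    current_version: u64,
    stored: RefCell<Option<(u64, Vec<u8>)>>,
}

impl VersionedStorage {
    fn new() -> Self {
        Self { current_version: 0, stored: RefCell::new(None) }
    }

    // Called on every state change; returns the version of this snapshot.
    fn bump(&mut self) -> u64 {
        self.current_version += 1;
        self.current_version
    }

    // Called when an async serialization finishes: commit only if this
    // snapshot is still the newest, otherwise drop it as stale.
    fn try_commit(&self, version: u64, bytes: Vec<u8>) -> bool {
        if version == self.current_version {
            *self.stored.borrow_mut() = Some((version, bytes));
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut store = VersionedStorage::new();

    let v1 = store.bump(); // first edit
    let v2 = store.bump(); // second edit arrives before v1's write finishes

    assert!(!store.try_commit(v1, b"old".to_vec())); // stale write rejected
    assert!(store.try_commit(v2, b"new".to_vec()));  // latest write lands

    let stored = store.stored.borrow();
    assert_eq!(stored.as_ref().unwrap().1, b"new".to_vec());
}
```

Stale writes waste a serialization pass but can never overwrite newer data, which is usually an acceptable trade-off for a background persistence path.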