Open · bakape opened this issue 5 years ago
Implement text nodes as a special kind of `<span>`, so they stay addressable.
Elaborate please. Do you mean a tag or other identifier in the field?
You can stick an id attribute onto a span, but not a text node. Most virtual DOMs keep a reference to the node in memory, but I plan to do no such thing across the FFI and instead look up nodes each time by ID. Though reversing that is up for consideration.
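For reference, a minimal sketch of what lookup-by-ID on every patch could look like on the Rust side, assuming web-sys bindings (with the "Window", "Document", "Element" and "Node" features enabled) rather than whatever externs brunhild ends up declaring; the function name is hypothetical:

```rust
use wasm_bindgen::JsValue;

// Every call re-resolves the node by ID through the FFI;
// no JS object reference is retained in WASM memory.
fn set_text_by_id(id: &str, text: &str) -> Result<(), JsValue> {
    let document = web_sys::window()
        .and_then(|w| w.document())
        .ok_or_else(|| JsValue::from_str("no document"))?;
    let node = document
        .get_element_by_id(id)
        .ok_or_else(|| JsValue::from_str("node not found"))?;
    node.set_text_content(Some(text)); // Element derefs to Node
    Ok(())
}
```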
Hmm, memory lookup by reference sounds a lot faster, though that's without considering the cross-FFI penalty. Could you give me a short list of pros and cons for looking up nodes each time by ID versus keeping a memory reference map/list?
References:
ID lookup:
The crux here seems to be the user being able to define IDs, and the FFI calls. Could something like this https://nullprogram.com/blog/2018/05/27/ be of help for massive FFI calls between JS and WASM, given there shouldn't be a lot of memory sharing to begin with?
Interesting read, but that is not applicable here at all. I consider the core difference to be storing (or not storing) JS object references in WASM memory for faster lookup times or much faster whole-subtree patches (this includes the initial page render). IDs are secondary.
@Chiiruno
- Store non-class property values and text node strings inside a global reference-counting map.
- The map needs to be accessible both by string ID and by actual string.
- Maybe somehow not allocate a String twice and use references.
Any idea how to implement this map, so that it maps both strings to IDs and IDs to strings without actually having to allocate each string twice?
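For illustration only, one possible shape for such a map, assuming the string is kept behind an `Rc<str>` so both directions share a single allocation (names here are made up, not project code):

```rust
use std::collections::HashMap;
use std::rc::Rc;

/// Sketch of a two-way interner: string -> ID and ID -> string,
/// with each string allocated once behind an Rc.
#[derive(Default)]
struct Interner {
    by_string: HashMap<Rc<str>, u16>,
    by_id: Vec<Rc<str>>, // index doubles as the ID
}

impl Interner {
    fn intern(&mut self, s: &str) -> u16 {
        if let Some(&id) = self.by_string.get(s) {
            return id;
        }
        let rc: Rc<str> = Rc::from(s);
        let id = self.by_id.len() as u16;
        self.by_id.push(rc.clone());
        self.by_string.insert(rc, id);
        id
    }

    fn resolve(&self, id: u16) -> Option<&str> {
        self.by_id.get(id as usize).map(|s| &**s)
    }
}
```

Reference counting for removal would still have to be layered on top; this only shows the double indexing without a second String allocation.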
With a map you can find the value from the key, but not the key from the value, right? If the string is the same, you could perhaps use a pointer?
Oh, perhaps have the first X characters of the actual string be the string ID, have the key be a pointer to the string, and use those first X characters as the key for finding the string? Although with that you could ditch the map entirely.
Yes, but how to do that in Rust with references or something? This will be used in at least 3 separate locations, so a generic type for this would be nice.
I've had a bad migraine the past few days, so I've just been catching up on manga and listening to quiet music, um... Maybe Arc/Box for references/pointers? I'm pretty sure you can make a map with those. You can also use unsafe to work with raw pointers while safely storing them in the map.
With Rust, you could make a type and write the impl for each location, while keeping the shared ("generic", non-specific) functions higher up, I think; I'll have to reread that.
Traits, I think that's what it was: the type implements the trait alongside your specific implementation. There may also have been a more virtual-dispatch-like approach I'm thinking of.
As for copy operations, you'll want to set the type to move instead; I forgot how to do that, but it's easily searchable.
Never mind, move is the default, and you can add the Copy trait if you want it. https://doc.rust-lang.org/rust-by-example/trait/derive.html
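A tiny sketch of what I mean, in case it helps (names are made up): shared behaviour lives in a trait with a default method, and a concrete type opts into Copy explicitly, since moves are the default:

```rust
// Shared ("generic") behaviour as a trait default method.
trait Tokenize {
    fn token(&self) -> u64;

    // Default method available to every implementer.
    fn same_token<T: Tokenize>(&self, other: &T) -> bool {
        self.token() == other.token()
    }
}

// Moves are the default; Copy is opted into via derive.
#[derive(Clone, Copy)]
struct AttrKey(u64);

impl Tokenize for AttrKey {
    fn token(&self) -> u64 {
        self.0
    }
}
```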
It might be better to use a sparse vector with a free list and two maps pointing to indices in the vector.
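Roughly something like this (a sketch, not final code): the values live in one vector, removed slots go onto a free list, and the two lookup maps would store indices into it:

```rust
/// Sketch of a sparse vector with a free list; slot indices stay stable,
/// so both the ID map and the string map can point at them.
struct SparseVec<T> {
    slots: Vec<Option<T>>,
    free: Vec<usize>, // vacated slots, reused before growing the vector
}

impl<T> SparseVec<T> {
    fn new() -> Self {
        Self { slots: Vec::new(), free: Vec::new() }
    }

    fn insert(&mut self, value: T) -> usize {
        match self.free.pop() {
            Some(i) => {
                self.slots[i] = Some(value);
                i
            }
            None => {
                self.slots.push(Some(value));
                self.slots.len() - 1
            }
        }
    }

    fn remove(&mut self, i: usize) -> Option<T> {
        let removed = self.slots.get_mut(i)?.take();
        if removed.is_some() {
            self.free.push(i);
        }
        removed
    }

    fn get(&self, i: usize) -> Option<&T> {
        self.slots.get(i)?.as_ref()
    }
}
```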
Is there any way to guarantee that we only push to the end of the vector? I think reads from the middle are a given here, but we may be able to speed things up by ensuring we only ever push at the end of the vector, to prevent reallocations.
Actually, it might not be worth it to dedup strings that aren't tags or property names. In that case we only have to deal with a 16-byte array and no extra heap allocations for the two maps. Yeah, let's do that.
As long as the memory usage isn't morbidly high, it should be okay to sacrifice some of it for speed.
It is important that it fits into the CPU cache though, hm...
The question is whether the strings would even repeat enough to justify the complication in typical usage scenarios.
Assuming an English alphabet, yes. Common HTML terms and some other terms may be worth the complication.
You may also be able to compress common non-programming terms in strings down quite a bit with a lookup map, thus saving memory. However, this assumes English.
Whitelist them?
Yeah, whitelist certain common terms, and maybe words in strings to be compressed.
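Something along these lines, maybe (the term list is just a placeholder): a static whitelist gets fixed token IDs, and anything not on it falls through to the dynamic interner:

```rust
// Hypothetical whitelist of common terms with fixed token IDs.
const WHITELIST: &[&str] = &["div", "span", "class", "id", "href", "input"];

fn whitelisted_token(s: &str) -> Option<u16> {
    WHITELIST.iter().position(|&w| w == s).map(|i| i as u16)
}
```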
I don't know about sub-string compression, as opposed to simple exact-match tokenization. The compression/decompression overhead might weigh in too much there. This is all in memory, not a database.
I guess we can't accept a CPU/memory spike on page load to do all of that, since people are rarely on the same page for long. The memory usage would be a lot lower if they stayed on the same page, however. Would there be a way to smartly detect pages that are usually active for a long time, or perhaps let the programmer set it?
Adaptive algorithms can come later.
That's fine, but I think it's best to consider the concept now, so there's less rewriting later. While avoiding premature optimization, of course.
- Need an append-only data structure that is writable like a string.
Could you work on this one?
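Something roughly like this is what I have in mind: a buffer that only ever appends, but can be written to with the usual formatting machinery (a sketch, assuming `std::fmt::Write` as the interface; the type name is made up):

```rust
use std::fmt::{self, Write};

/// Append-only buffer: contents can only grow, but it accepts
/// write!()/write_str() just like a String.
struct Appender {
    buf: String,
}

impl Appender {
    fn new() -> Self {
        Self { buf: String::new() }
    }

    fn as_str(&self) -> &str {
        &self.buf
    }
}

impl Write for Appender {
    fn write_str(&mut self, s: &str) -> fmt::Result {
        self.buf.push_str(s);
        Ok(())
    }
}
```

Usage while rendering HTML would be along the lines of `write!(appender, "<span id=\"bh-{}\">", id)`.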
Actually, before that we need to wipe this repo clean and migrate it to the newer Rust WASM toolkit.
> Could you work on this one?
Sure, seems pretty well-defined.
@Chiiruno I set up a basic project with wasm_bindgen as the linking layer.
String tokenizer done.
unsafe unsafe unsafe
like grandpa C was intended.
Use as few string allocations as possible and speed up diffs through tokenization:
- Implement the `Hash` trait for it and just return the contents as `u64`.

Nodes:
- Implement text nodes as a special kind of `<span>`, so they stay addressable.

Patching:

DOM events:

Node creation macros:

Details:
- `<head>`.