It turns out the fast template algorithm from the h-compact branch is not efficient for a small number of nodes: the cache investment only pays off after ~300 nodes, which is a rare case in real scenarios, and it is memory-heavy too.
Instead, this algorithm proposes meta-functions and a light start.
The 1st h call does not create evaluators; it builds a fragment via innerHTML and collects the evaluable nodes (a property path, see https://jsperf.com/queryselector-vs-prop-access, is the fastest way to address a node, and it is unaffected by replacements). It then puts an archetype node with those refs into a weak cache, expected to be disposed if the template is never reused.
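A minimal sketch of this light start, assuming a tagged-template signature h(statics, ...fields); the weak cache, the placeholder comment and the helper names (register, firstCall) are illustrative assumptions, not the actual API:

```js
const archetypes = new WeakMap(); // statics -> { node, paths, evaluators }

function register (statics) {
  // build the archetype fragment via innerHTML, marking field positions with comments
  const tpl = document.createElement('template');
  tpl.innerHTML = statics.join('<!--field-->');
  const node = tpl.content;

  // collect property paths (childNodes indices) to the evaluable nodes:
  // prop access beats querySelector and is not affected by later replacements
  const paths = [];
  (function walk (parent, path) {
    for (let i = 0; i < parent.childNodes.length; i++) {
      const child = parent.childNodes[i];
      if (child.nodeType === Node.COMMENT_NODE && child.data === 'field') {
        paths.push(path.concat(i));
      } else {
        walk(child, path.concat(i));
      }
    }
  })(node, []);

  // the archetype node and its refs go into the weak cache; if the template is
  // never reused they are simply disposed together with the statics array
  const entry = { node, paths, evaluators: null };
  archetypes.set(statics, entry);
  return entry;
}

function firstCall (statics, fields) {
  const { node, paths } = register(statics);
  // no evaluators yet: clone the archetype and substitute the fields directly
  const clone = node.cloneNode(true);
  paths.forEach((path, i) => {
    const target = path.reduce((n, idx) => n.childNodes[idx], clone);
    target.replaceWith(document.createTextNode(fields[i]));
  });
  return clone;
}
```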
The 2nd h call assumes the template is used more than once and invests time in creating field evaluators, so any subsequent call clones the initial node and runs the evaluators.
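A sketch of that warm path, continuing the assumptions above: on the 2nd call the cached paths are compiled into field evaluators, and every call from then on just clones the archetype and runs them.

```js
function warmCall (statics, fields) {
  const entry = archetypes.get(statics);

  // invest in field evaluators only once the template has proven to be reused
  if (!entry.evaluators) {
    entry.evaluators = entry.paths.map(path => (root, value) => {
      const target = path.reduce((n, idx) => n.childNodes[idx], root);
      // a real evaluator would dispatch on value type (text, node, attribute, etc.);
      // text and plain nodes are enough for this sketch
      target.replaceWith(value instanceof Node ? value : document.createTextNode(value));
    });
  }

  // every subsequent call just clones the initial node and runs the evaluators
  const clone = entry.node.cloneNode(true);
  entry.evaluators.forEach((evaluate, i) => evaluate(clone, fields[i]));
  return clone;
}
```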
A first call with non-primitive fields immediately creates evaluators.
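An illustrative guess at the dispatch tying the sketches together: primitive fields take the light first call, while non-primitive fields (elements, objects, functions) cannot be serialized into innerHTML, so evaluators are created immediately via the warm path.

```js
const isPrimitive = v =>
  v == null || (typeof v !== 'object' && typeof v !== 'function');

function h (statics, ...fields) {
  if (archetypes.has(statics)) return warmCall(statics, fields);    // warm template
  if (fields.every(isPrimitive)) return firstCall(statics, fields); // light start
  register(statics);                // non-primitive fields: skip the light start
  return warmCall(statics, fields); // and create evaluators immediately
}
```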