Is this really a performance bottleneck?
I think it shouldn't be a real limiting factor because translations are usually searched only once, when the label (or other) object is created, e.g. `lv_label_set_text(label, _("dog"))`. After that, when the label is refreshed, it already stores the translated string and doesn't need to search again.
Besides, `strcmp` usually fails quickly, e.g. when the first bytes are not the same.
If performance matters here, I'd rather write a faster `strcmp`. In the past days I found that the built-in `memcpy` and `memset` in STM CUBE IDE were very slow; I could write alternatives that were ~5 times faster. We could do the same with `strcmp` to be sure we have a compiler-independent, fast function.
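
For illustration, a minimal compiler-independent `strcmp` sketch (the name `my_strcmp` is hypothetical): a plain byte-wise comparison, unrolled by four to cut loop overhead. Whether it actually beats the toolchain's version would have to be benchmarked on the target MCU.

```c
/* Hypothetical drop-in replacement for strcmp.
 * Byte-wise comparison, unrolled 4x; returns <0, 0 or >0 like strcmp. */
int my_strcmp(const char *a, const char *b)
{
    const unsigned char *s1 = (const unsigned char *)a;
    const unsigned char *s2 = (const unsigned char *)b;

    for (;;) {
        /* Fall out as soon as a byte differs or the string ends. */
        if (s1[0] != s2[0] || s1[0] == 0) return s1[0] - s2[0];
        if (s1[1] != s2[1] || s1[1] == 0) return s1[1] - s2[1];
        if (s1[2] != s2[2] || s1[2] == 0) return s1[2] - s2[2];
        if (s1[3] != s2[3] || s1[3] == 0) return s1[3] - s2[3];
        s1 += 4;
        s2 += 4;
    }
}
```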
> I think it shouldn't be a real limiting factor because translations are usually searched once when the label (or other) object is created.
Sounds reasonable. Then I think we can close this issue until better times, when someone has an app with thousands of phrases. What do you think?
I agree.
I don't know how popular this lib is, but it may be nice to tie up the loose ends now to avoid an urgent request in the future.
This function needs optimization: use a binary ("half cut") search for "long" collections (> 4..8 strings); see the sketch after the notes below.
To make this work, the JS backend also has to sort the generated phrase keys in the same `strcmp` order the C side uses. If anyone cares about the C template, I will update the JS part & tests then.
Notes:

- the `strcmp` algorithm must be kept in sync between C and JS (for example, "unicode + bytes compare").
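
A sketch of what the optimized lookup could look like on the C side, assuming the JS backend emits the table already sorted in `strcmp` order (the `phrase_pair_t` struct and `find_phrase` name are hypothetical, not the existing lv_i18n API):

```c
#include <string.h>
#include <stddef.h>

/* Hypothetical entry in the generated, pre-sorted phrase table. */
typedef struct {
    const char *key;          /* source phrase, e.g. "dog" */
    const char *translation;  /* translated string */
} phrase_pair_t;

/* Binary ("half cut") search over a table sorted by strcmp on `key`. */
static const char *find_phrase(const phrase_pair_t *table, size_t count,
                               const char *key)
{
    size_t lo = 0, hi = count;

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        int cmp = strcmp(key, table[mid].key);

        if (cmp == 0) return table[mid].translation;
        if (cmp < 0)  hi = mid;       /* search the lower half */
        else          lo = mid + 1;   /* search the upper half */
    }
    return NULL; /* not found: caller can fall back to the untranslated key */
}
```

With the table sorted at generation time, the lookup drops from O(n) string compares to O(log n), which only starts to matter for the "thousands of phrases" case mentioned above.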