[x] update docs (some methods have gained a second `options` param; briefly explain that computing the platformConfig / platform-transformed tokens is now cached)
[x] fix the perf test that now fails (e.g. bump the threshold to 2000ms for now, though I'd like to get it below 1s later if possible)
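The actual test harness isn't shown in this PR; a minimal sketch of what a 2000ms budget check could look like (function names here are illustrative, not the PR's test code):

```javascript
// Measure how long a synchronous process takes (performance is a Node global).
function measure(fn) {
  const start = performance.now();
  fn();
  return performance.now() - start;
}

// Temporary budget of 2000ms; the stated goal is to get below 1s eventually.
const BUDGET_MS = 2000;

// Throws if the measured process exceeds the budget, else returns the elapsed time.
function assertWithinBudget(fn) {
  const elapsed = measure(fn);
  if (elapsed > BUDGET_MS) {
    throw new Error(`Process took ${elapsed.toFixed(0)}ms, budget is ${BUDGET_MS}ms`);
  }
  return elapsed;
}
```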
I've started work on adding performance tests and defining budgets for some SD processes.
So far I've got a basic test with a single token, as well as a set of 9000 tokens, 6000 of which are references, with a reference chain depth of 3.
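A fixture with that shape could be generated along these lines (this is an illustrative sketch, not the PR's actual test fixture): 3000 base tokens, each with two reference tokens chained on top, giving 9000 tokens total, 6000 references, and chains of depth 3.

```javascript
// Hypothetical fixture generator for the perf test described above.
function generateTokens() {
  const tokens = { color: {} };
  for (let i = 0; i < 3000; i++) {
    // Depth 1: a plain base token with a hex value.
    tokens.color[`base${i}`] = { value: `#${(i % 4096).toString(16).padStart(3, '0')}` };
    // Depth 2: references the base token.
    tokens.color[`ref${i}`] = { value: `{color.base${i}}` };
    // Depth 3: references the reference.
    tokens.color[`refref${i}`] = { value: `{color.ref${i}}` };
  }
  return tokens;
}
```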
This is where things start to slow down, and using the Chrome DevTools Performance tab I've already spotted a significant bottleneck in the SD class methods: many of them are called redundantly multiple times, and no caching happens for the expensive ones (mainly exportPlatform). This kicked off a cleanup of the methods, and my proposal is to deprecate getPlatform and exportPlatform (for removal in v5); they are ambiguous, not pure, and have unintuitive return types. Two new methods replace them that are purer and clearer about what they do.
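To make the split concrete, here is a toy sketch of the idea (class and method names are illustrative; the PR's actual replacement methods may be named differently): one method returns only the resolved platform config, the other only the transformed tokens for a platform, each with a single, unambiguous return type.

```javascript
// Toy model of the proposed API split, not Style Dictionary's real implementation.
class MiniSD {
  constructor(config) {
    this.config = config;
  }

  // Replaces the ambiguous getPlatform(): returns just the platform config.
  getPlatformConfig(platform) {
    return { ...this.config.platforms[platform] };
  }

  // Replaces exportPlatform(): returns just the transformed tokens.
  getPlatformTokens(platform) {
    const { prefix = '' } = this.getPlatformConfig(platform);
    const out = {};
    for (const [name, token] of Object.entries(this.config.tokens)) {
      out[prefix + name] = token.value; // stand-in for the real transform pipeline
    }
    return out;
  }
}
```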
I've also added in-memory caching to get rid of the redundant recomputes, with the added ability to turn this caching off for very advanced use cases (e.g. when running tests against a single SD instance and recomputing token transforms/preprocessors while changing the config/tokens in between).
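The memoization-with-opt-out pattern can be sketched like this (the `cache` option name and the class are hypothetical, for illustration only):

```javascript
// Illustrative per-platform memoization with an opt-out for advanced use cases
// that mutate config/tokens between calls on one instance.
class CachedSD {
  constructor(config) {
    this.config = config;
    this._cache = new Map();
    this.computeCount = 0; // tracks actual recomputes, for demonstration
  }

  getPlatformTokens(platform, { cache = true } = {}) {
    if (cache && this._cache.has(platform)) return this._cache.get(platform);
    this.computeCount++;
    // Stand-in for the expensive transform work.
    const result = Object.fromEntries(
      Object.entries(this.config.tokens).map(([name, token]) => [name, token.value])
    );
    if (cache) this._cache.set(platform, result);
    return result;
  }
}
```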
@dbanksdesign I'm not sure whether I should continue my perf hunt in this branch or create new ones. The 2nd perf test currently fails because it takes 1-2 seconds; before my changes it was 5+ seconds, so I've already sped things up by more than 2x, and that's with an unthrottled CPU (imagine how much this matters on slower CPUs, e.g. in CI pipelines, which is a majority use case for this library).
The main performance culprit that some of the Tokens Studio users are reporting very likely has to do with resolving token references / expanding nested composites, so that's what I'd look into next. Aggressive memoization of reference resolutions will probably make a huge difference, as will using a JS Map for cheaper access/searching of tokens (arrays are expensive for lookup, objects are expensive to traverse; Map is way better at both).
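The Map idea could look roughly like this (a sketch, not the PR's code): flatten the token tree into a Map keyed by dot-path, so that resolving a reference like `{color.base.red}` becomes a single O(1) lookup instead of a tree traversal or linear search.

```javascript
// Flatten a nested token tree into a Map keyed by dot-path.
// A node is treated as a token if it has a `value` property, else as a group.
function flattenToMap(tokens, prefix = '', map = new Map()) {
  for (const [key, node] of Object.entries(tokens)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (node && typeof node === 'object' && 'value' in node) {
      map.set(path, node);
    } else if (node && typeof node === 'object') {
      flattenToMap(node, path, map);
    }
  }
  return map;
}

// A reference string like "{color.base.red}" resolves with one Map lookup.
function resolveRef(map, ref) {
  return map.get(ref.slice(1, -1));
}
```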
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.