So! We have this code here that wraps lambdas in such a way that the original function is passed the value of its argument after it has been rendered. This lets a lambda act as a filter. In our use case, that means a given lambda will return (often multiple) unique values for each possible page it can be used on. This interacts poorly with hogan.js, which treats the output of lambdas as templates to be compiled and cached. The result is effectively unbounded memory growth in our server processes, which is sad.
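A minimal sketch of that wrapping (all names here are made up; this assumes a mustache-style lambda interface where the engine calls the lambda with the raw section text plus a render function):

```javascript
// Wrap a plain string function so it can be used as a section lambda.
function asFilter(original) {
  return function (text, render) {
    // Render the section body first, then hand the finished string to
    // the original function -- so it behaves as a plain string filter.
    return original(render(text));
  };
}

// Hypothetical filter that upper-cases its (already rendered) input:
var upper = asFilter(function (s) { return s.toUpperCase(); });

// Simulating how a template engine might invoke it:
var out = upper("{{name}}", function (t) {
  return t.replace("{{name}}", "world");
});
// out === "WORLD"
```

The memory problem comes in because each distinct string a lambda returns becomes a new entry in hogan's compiled-template cache.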
This fixes the memory problem by returning a much smaller set of possible values from each lambda. Each call of the lambda stores a value in the context/opt object, reachable via the lambda's name plus a counter of how many times the lambda has been called in the current render invocation. The actual return value is then a small template string referencing that stored value.
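A sketch of that fix (the `name/index` key format here is an assumption, not necessarily what the real code uses): instead of returning the rendered string directly, we stash it in the render context under the lambda's name plus a per-render call counter, and return a tiny, stable template string that references it.

```javascript
// Wrap a filter so its unique output never reaches hogan's template
// cache; only a small fixed set of reference templates does.
function asCachedFilter(name, original, context) {
  var calls = 0; // would be reset at the start of each render invocation
  return function (text, render) {
    var key = name + "/" + calls++;
    // Store the actual (unique) value on the render context...
    context[key] = original(render(text));
    // ...and return a stable template string referencing it. hogan.js
    // compiles this small string once per (name, index) pair, so the
    // number of cached templates stays bounded.
    return "{{" + key + "}}";
  };
}
```

Since the set of returned strings is now determined by lambda names and call positions rather than by page content, the template cache stops growing with traffic.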
So for each lambda, the maximum number of new cache entries it can produce is the maximum number of times that lambda can be reached in a single render call. That's more than 20 for some of our existing lambdas (gpt encoding, for example), but it's still bounded at a perfectly fine low-ish number. Even having a thousand templates cached is better than the tens of thousands we have now.
Somewhat hacky.
So let's say you had this template:
Currently, the output would be:
With this new code, it would be:
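As a hedged illustration (the `gptEncode` lambda name and the `name/index` key format below are assumptions for the sake of the example):

```
{{! template using a hypothetical encoding lambda: }}
{{#gptEncode}}{{pageTitle}}{{/gptEncode}}

{{! current lambda return value -- unique per page, so hogan compiles
    and caches a new template for every page: }}
my%20unique%20page%20title

{{! new return value -- a stable reference; the encoded string itself
    lives on the render context under "gptEncode/0": }}
{{gptEncode/0}}
```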
cc/ @FabledWeb