Some frameworks and custom applications opt to plop an LRU cache, such as functools.lru_cache, on top of the main routing function.
Investigate whether this could be a good idea to add to Falcon as an optional customization.
We probably don't want to do this by default, as there is probably no one-size-fits-all approach here. For contrived benchmarks exercising just a couple of endpoints, a cache of size 3 might be optimal; for a real-world application dealing with many objects or other dynamic segments, and running on many parallel processes/instances/containers, even a cache of 1000 might turn out to be a de-optimization.
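To illustrate, the approach amounts to memoizing a path-to-route lookup. This is only a sketch; `find_route` and the toy route table stand in for Falcon's actual (more expensive) routing tree traversal and are not its real API. Note how every distinct concrete path becomes its own cache key, which is why dynamic segments such as `{id}` can blow up the cache:

```python
import functools

# Hypothetical route table; real routers walk a compiled tree instead.
ROUTES = {
    "/things": "ThingsResource",
    "/things/{id}": "ThingResource",
}


def find_route(path):
    """Stand-in for the real, comparatively expensive lookup."""
    if path in ROUTES:
        return ROUTES[path]
    parts = path.rstrip("/").split("/")
    # Toy matching for the single dynamic template above.
    if len(parts) == 3 and parts[1] == "things":
        return ROUTES["/things/{id}"]
    return None


# maxsize is exactly the knob with no one-size-fits-all value:
# "/things/1", "/things/2", ... each occupy a separate slot.
cached_find_route = functools.lru_cache(maxsize=1024)(find_route)
```

`cached_find_route.cache_info()` exposes hit/miss counters, which could also feed any future auto-tuning heuristic.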
Another idea by @kgriffs:
I wonder if we can do something clever to exclude certain routes from blowing up the LRU.
Actually, it would be neat if the framework could learn and auto-tune the LRU.
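One hedged sketch of the "exclude certain routes" idea: `functools.lru_cache` cannot skip insertion selectively, but a small hand-rolled LRU can decline to store high-cardinality dynamic paths so they never evict the hot static entries. `SelectiveLRU` and `expensive_lookup` below are hypothetical names, not anything in Falcon:

```python
from collections import OrderedDict


class SelectiveLRU:
    """Minimal LRU that callers can opt out of per entry."""

    def __init__(self, maxsize=1024):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key):
        try:
            self._data.move_to_end(key)  # mark as recently used
            return self._data[key]
        except KeyError:
            return None

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used


def expensive_lookup(path):
    """Stand-in for the real traversal; reports whether the matched
    route template contained a dynamic segment."""
    if path.startswith("/things/"):
        return "ThingResource", True
    return "StaticResource", False


cache = SelectiveLRU(maxsize=1024)


def find_route(path):
    route = cache.get(path)
    if route is None:
        route, has_dynamic = expensive_lookup(path)
        # Skip caching dynamic paths: their unbounded key space is
        # what turns the LRU into a de-optimization.
        if not has_dynamic:
            cache.put(path, route)
    return route
```

Auto-tuning could then build on the same hook, e.g. by tracking per-template hit rates before deciding what to admit, though that is pure speculation at this point.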