Fixes: #issue
Motivation and Context
Defaults to using an LRU cache to reduce duplicate assignments.

Since this is a server-side SDK, we expect many more keys in this cache, coming from a variety of subjects. To pick a sensible default, we decided to arbitrarily cap it at approximately 10 MB of memory, which seems small enough not to draw attention to itself.

Worked with ChatGPT to understand the memory usage of lru-cache.
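As a minimal sketch of the intended sizing behavior (the SDK itself uses the lru-cache package; the plain-Map implementation and the 2-bytes-per-character size estimate below are assumptions for illustration only):

```typescript
// Sketch: a byte-bounded LRU using a Map's insertion order for recency.
const MAX_CACHE_BYTES = 10 * 1024 * 1024; // ~10 MB default cap

class ByteBoundedLRU {
  private map = new Map<string, string>();
  private bytes = 0;

  // Assumed estimate: JS strings are UTF-16, so ~2 bytes per character.
  private sizeOf(key: string, value: string): number {
    return 2 * (key.length + value.length);
  }

  set(key: string, value: string): void {
    if (this.map.has(key)) this.delete(key);
    this.map.set(key, value);
    this.bytes += this.sizeOf(key, value);
    // Evict least-recently-used entries until under the byte budget.
    while (this.bytes > MAX_CACHE_BYTES && this.map.size > 0) {
      const oldest = this.map.keys().next().value as string;
      this.delete(oldest);
    }
  }

  get(key: string): string | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Refresh recency by re-inserting at the end of the Map.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  delete(key: string): void {
    const value = this.map.get(key);
    if (value !== undefined) {
      this.bytes -= this.sizeOf(key, value);
      this.map.delete(key);
    }
  }

  get sizeBytes(): number {
    return this.bytes;
  }
}
```

With lru-cache itself, the equivalent cap would be expressed via its `maxSize` option together with a `sizeCalculation` callback rather than hand-rolled eviction.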
Description
How has this been tested?