Closed by gizatt, 5 months ago
I've been starting to consider some of these same improvements myself.
I'm curious whether you've thought about how to keep the cache from growing without bound. If I'm reading this code correctly, calling set_object over and over on the same path with a new object each time would eventually make the cache unreasonably large and run out of memory, filled with cached-but-never-reused objects.
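One common way to bound that growth is LRU eviction. Here's a minimal sketch (a hypothetical BoundedCache, not code from this PR) that caps the number of cached entries, relying on a Map's insertion-order iteration to find the least recently used key:

```javascript
// Hypothetical size-bounded UUID cache with LRU eviction.
// A Map iterates keys in insertion order, so re-inserting on access
// keeps the least recently used entry at the front.
class BoundedCache {
  constructor(maxEntries) {
    this.maxEntries = maxEntries;
    this.map = new Map();
  }
  get(uuid) {
    if (!this.map.has(uuid)) return undefined;
    // Re-insert to mark this entry as most recently used.
    const value = this.map.get(uuid);
    this.map.delete(uuid);
    this.map.set(uuid, value);
    return value;
  }
  set(uuid, object) {
    if (this.map.has(uuid)) this.map.delete(uuid);
    this.map.set(uuid, object);
    if (this.map.size > this.maxEntries) {
      // Evict the least recently used entry (first key in iteration order).
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
}
```

With maxEntries tuned to a sane working-set size, repeated set_object calls on the same path would recycle slots instead of accumulating dead objects.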
There's also the flip side of this -- dispose_recursive() looks like it would dispose any loaded textures as soon as the first copy of an object is removed from the scene, leaving any remaining copies of that object without their textures.
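One way to avoid disposing textures that other copies still use is reference counting at removal time. A hypothetical sketch (retainTexture/releaseTexture are illustrative names, and texture.dispose() stands in for the three.js Texture.dispose() call):

```javascript
// Hypothetical reference counting for shared textures: only dispose a
// texture once the last object using it has been removed from the scene.
const refCounts = new Map(); // texture.uuid -> number of live users

function retainTexture(texture) {
  refCounts.set(texture.uuid, (refCounts.get(texture.uuid) || 0) + 1);
}

function releaseTexture(texture) {
  const count = (refCounts.get(texture.uuid) || 0) - 1;
  if (count <= 0) {
    refCounts.delete(texture.uuid);
    texture.dispose(); // safe: no remaining users of this texture
    return true;
  }
  refCounts.set(texture.uuid, count);
  return false; // still referenced by another copy; keep GPU resources
}
```

dispose_recursive() would then call releaseTexture instead of disposing unconditionally, so the first removal only decrements the count.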
Closing this as incomplete / abandoned.
At present, even if a geometry or material is sent twice with the same UUID, there's no caching of that declared-to-be-duplicated data (at the parsing level, anyway -- maybe there's something deeper). This PR enables UUID-based caching by creating a persistent ExtensibleObjectLoader that keeps a cache of UUIDs across all parsing calls (which is more than the vanilla ObjectLoader does). Clients can take advantage of this (e.g. meshcat-python PR #114) to save a lot of bandwidth.
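To illustrate the persistent-cache idea (class name, method, and message shape are illustrative, not the actual ExtensibleObjectLoader API): a long-lived loader merges each message's declared geometries and materials into UUID-keyed caches, so a later message can reference a UUID without resending the data:

```javascript
// Illustrative sketch of a loader whose UUID caches persist across parse()
// calls, unlike a loader that rebuilds its resource tables per message.
class CachingLoader {
  constructor() {
    this.geometries = new Map(); // uuid -> geometry description
    this.materials = new Map();  // uuid -> material description
  }
  parse(json) {
    // Register any resources declared in this message.
    for (const g of json.geometries || []) this.geometries.set(g.uuid, g);
    for (const m of json.materials || []) this.materials.set(m.uuid, m);
    // Resolve the object's references, which may point at resources
    // declared in an earlier message.
    const obj = json.object;
    return {
      geometry: this.geometries.get(obj.geometry),
      material: this.materials.get(obj.material),
    };
  }
}
```

A client like meshcat-python can then send full geometry/material data once and refer to it by UUID afterwards, which is where the bandwidth savings come from.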