When MetadataService re-renders a previously cached result, it does not discard the invalidated result until the new one overwrites it. The Java garbage collector therefore cannot reclaim the memory held by the stale result for use by the new rendering operation, so more heap is required than necessary.
Deleting the cache entry in this situation would reduce peak memory usage.
This isn't a perfect solution, though: if additional requests for the same result arrive during the rendering operation, each will trigger its own rendering, all running in parallel and each using additional memory. It might be worth thinking about serialising this using Future<> somehow.
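One possible shape for both ideas is sketched below. This is a hypothetical illustration, not MetadataService's actual code: the class and method names (RenderCache, invalidate, render) are invented. Removing the entry on invalidation lets the GC reclaim the stale result before the re-render starts, and caching a CompletableFuture via computeIfAbsent means concurrent requests for the same key share one in-flight rendering instead of each rendering in parallel.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class RenderCache {
    // The cache holds futures, not results, so that concurrent requests
    // for a key that is still rendering all share the same in-flight work.
    private final Map<String, CompletableFuture<String>> cache = new ConcurrentHashMap<>();
    final AtomicInteger renders = new AtomicInteger(); // counts actual render calls

    // Drop the stale entry up-front so the GC can reclaim the old result
    // while the replacement is being rendered, lowering peak heap usage.
    void invalidate(String key) {
        cache.remove(key);
    }

    // computeIfAbsent guarantees at most one rendering per key at a time;
    // later callers get the same future and wait on it.
    CompletableFuture<String> get(String key) {
        return cache.computeIfAbsent(key, k ->
            CompletableFuture.supplyAsync(() -> render(k)));
    }

    // Stand-in for the expensive rendering operation.
    private String render(String key) {
        renders.incrementAndGet();
        return "rendered:" + key;
    }

    public static void main(String[] args) {
        RenderCache c = new RenderCache();
        CompletableFuture<String> a = c.get("page");
        CompletableFuture<String> b = c.get("page");
        System.out.println(a == b);               // both requests share one render
        System.out.println(a.join());
        System.out.println(c.renders.get());      // only one render so far
        c.invalidate("page");                     // stale entry removed before re-render
        c.get("page").join();
        System.out.println(c.renders.get());      // re-rendered once after invalidation
    }
}
```

The trade-off is that the future itself stays in the map between invalidations, so a completed-but-stale result is only collectable once invalidate() runs; a production version would also want error handling so a failed render does not cache a failed future forever.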