Open letsfindaway opened 1 year ago
Since a user reported that loading a heavy document locked up their system, this issue has become more important to me.
I would prefer a usage-based algorithm (LRU) over a time-based one, and would indeed only cache pages of the current document. The number of pages kept in the cache could be a configuration variable, with a default somewhere between 5 and 10.
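A usage-based policy along these lines could be sketched as below. This is only an illustration, not OpenBoard code: `Scene` and `SceneLruCache` are placeholder names, and the capacity would come from the proposed configuration variable.

```cpp
#include <cstddef>
#include <list>
#include <memory>
#include <unordered_map>
#include <utility>

// Hypothetical stand-in for the cached scene type.
struct Scene { int page; };

// Minimal LRU sketch: most recently used pages sit at the front of the
// list, the least recently used entry is evicted once capacity is exceeded.
class SceneLruCache {
public:
    explicit SceneLruCache(std::size_t capacity) : mCapacity(capacity) {}

    void put(int page, std::shared_ptr<Scene> scene) {
        auto it = mIndex.find(page);
        if (it != mIndex.end())
            mOrder.erase(it->second);          // replace an existing entry
        mOrder.push_front({page, std::move(scene)});
        mIndex[page] = mOrder.begin();
        if (mOrder.size() > mCapacity) {
            mIndex.erase(mOrder.back().first); // evict least recently used
            mOrder.pop_back();
        }
    }

    std::shared_ptr<Scene> get(int page) {
        auto it = mIndex.find(page);
        if (it == mIndex.end())
            return nullptr;
        // Move the accessed entry to the front (most recently used).
        mOrder.splice(mOrder.begin(), mOrder, it->second);
        return it->second->second;
    }

    std::size_t size() const { return mOrder.size(); }

private:
    std::size_t mCapacity;
    std::list<std::pair<int, std::shared_ptr<Scene>>> mOrder;
    std::unordered_map<int, decltype(mOrder)::iterator> mIndex;
};
```

Both `put` and `get` are O(1), so the bookkeeping cost is negligible compared to loading a scene.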
I'm not sure whether assigning a "heaviness" value to a scene, e.g. based on its number of items, is useful. Of course we could keep more pages in the cache if they are lightweight. OTOH, lightweight pages are fast to load again, so the cache does not have much benefit for them.
The main benefit of the cache is when flipping through pages. So I would instead think about caching not only adjacent pages, but extending background loading to two or three pages around the current one.
BTW: Have we considered the case where background loading has started but is not yet finished when we switch to the next page? Do we then load the page again instead of waiting for the background load to finish? I haven't checked, but I assume this is the case. If so, loading adjacent pages will add CPU load and slow things down instead of improving them.
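One way to avoid the double load described above is to keep a map of in-flight loads, so that switching to a page whose background load is still running joins that load instead of starting a new one. A sketch under assumed names (`Scene`, `SceneLoader` and the loader lambda are placeholders, not the actual OpenBoard API):

```cpp
#include <future>
#include <map>
#include <memory>
#include <mutex>

// Hypothetical scene type; stands in for the real cached scene class.
struct Scene { int page; };

class SceneLoader {
public:
    // Returns the scene for a page, reusing an in-flight background load
    // for the same page instead of starting a second one.
    std::shared_ptr<Scene> load(int page) {
        std::shared_future<std::shared_ptr<Scene>> fut;
        {
            std::lock_guard<std::mutex> lock(mMutex);
            auto it = mPending.find(page);
            if (it == mPending.end()) {
                // No load in flight: start one and remember its future.
                // The lambda is a placeholder for the real scene loading.
                it = mPending.emplace(page,
                    std::async(std::launch::async, [page] {
                        return std::make_shared<Scene>(Scene{page});
                    }).share()).first;
            }
            fut = it->second;
        }
        // Blocking here joins the already running load rather than
        // parsing the document a second time.
        return fut.get();
    }

    // Kick off a background load without waiting for the result;
    // would share the same map-insertion logic as load().
    void preload(int page) { /* same as load(), minus fut.get() */ }

private:
    std::mutex mMutex;
    std::map<int, std::shared_future<std::shared_ptr<Scene>>> mPending;
};
```

With this structure, preloading two or three pages ahead only ever costs one load per page, no matter how fast the user flips.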
Currently the `UBSceneCache` keeps all scenes forever, until a document or a page is deleted. When switching documents and pages often, this leads to ever-increasing memory usage, especially for "heavy" pages containing many images, audio or video.

It would be a good idea to implement some retention policy that deletes scenes from the cache which have not been used for some time or are not likely to be used soon. The policy should be implemented within the cache and should not need external triggers affecting the cache's API.
Ideas:

- Check that `shared_ptr::use_count() == 1` before removing an entry.
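The `use_count()` guard could look like the following sketch. The container and names are illustrative only; the point is that an entry is evicted only when the cache holds the sole reference, so a scene still displayed in a view is never freed.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical stand-in for the cached scene type.
struct Scene { int page; };

// Evict only entries the cache holds exclusively; returns how many
// entries were removed.
std::size_t evictUnused(std::vector<std::shared_ptr<Scene>>& cache) {
    std::size_t evicted = 0;
    for (auto it = cache.begin(); it != cache.end();) {
        if (it->use_count() == 1) {   // cache holds the only reference
            it = cache.erase(it);
            ++evicted;
        } else {
            ++it;                     // still referenced elsewhere: keep
        }
    }
    return evicted;
}
```

Note that `use_count()` is only a safe signal here if all other references are held on the same thread; in a multithreaded cache the check would need to happen under the cache's lock.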