For simplicity, the basic architecture will be developed with a focus on 2D; however, 3D prototypes will be explored in parallel to ensure the design generalizes beyond 2D. The initial version has been prototyped to include simplified tiled rendering, basic caching, and a multiscale rendering strategy. However, the existing prototype has some key limitations:
- [ ] replace numpy-canvas hack
- [ ] enable >2 dimensions
- [ ] expose renderer via UI
- [ ] make ChunkCacheManager allocation fair across multiple layers/scales
- [ ] replace threading module usage with Dask executors
- [ ] create a visual debugger for multiscale tiled data
The current 2D prototype assumes that it is possible to pre-allocate a large 2D plane representing a cross-section of the dataset. This works in many cases, but fails for datasets with a large 2D extent, and it is inefficient even when it works, overallocating GPU memory. The 2D allocation should instead be sized based on napari's current display size.
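As a rough illustration of the intended change, the sketch below sizes the buffer from the visible canvas plus a small tile margin for panning, rather than from the dataset extent. The function and parameter names are hypothetical; a real implementation would hook into napari's canvas dimensions.

```python
import numpy as np

def allocate_display_buffer(canvas_shape, pad_tiles=1, tile_size=256, dtype=np.uint8):
    """Allocate a texture buffer sized to the current display, not the dataset.

    Instead of pre-allocating a plane covering the full 2D extent, allocate
    only enough to cover the visible canvas plus a margin of tiles so that
    panning does not immediately require reallocation.
    """
    pad = 2 * pad_tiles * tile_size  # margin on both sides, in pixels
    height, width = canvas_shape
    return np.zeros((height + pad, width + pad), dtype=dtype)

# A full-extent uint8 plane for a 200k x 200k dataset would need ~40 GB;
# a display-sized buffer for a 1080p canvas needs only a few MB.
buf = allocate_display_buffer((1080, 1920))
print(buf.shape, buf.nbytes / 1e6, "MB")
```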
Although naive caching strategies can be easily implemented, it is important to consider multiscale-aware caching strategies, where particular scales can be kept in the cache to keep interaction responsive.
These caching strategies are important in the context of support for multiple layers. Cache resources must be shared evenly across layers; otherwise users may observe uneven performance when working with multiple layers simultaneously.
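One possible shape for fair sharing across layers is sketched below with a hypothetical FairChunkCache (the real ChunkCacheManager API may differ): the total byte budget is split evenly across registered layers, and each layer evicts its own least-recently-used chunks when it exceeds its share.

```python
from collections import OrderedDict

class FairChunkCache:
    """Sketch: split a byte budget evenly across layers, LRU within each.

    Note: adding a layer shrinks every layer's budget, but existing layers
    are only trimmed on their next put (a real implementation might re-evict
    eagerly).
    """

    def __init__(self, total_bytes):
        self.total_bytes = total_bytes
        self.layers = {}  # layer_id -> OrderedDict of key -> (chunk, nbytes)

    def _budget_per_layer(self):
        return self.total_bytes // max(1, len(self.layers))

    def put(self, layer_id, key, chunk, nbytes):
        cache = self.layers.setdefault(layer_id, OrderedDict())
        cache[key] = (chunk, nbytes)
        cache.move_to_end(key)
        budget = self._budget_per_layer()
        while sum(nb for _, nb in cache.values()) > budget:
            cache.popitem(last=False)  # drop least-recently-used chunk

    def get(self, layer_id, key):
        cache = self.layers.get(layer_id, {})
        if key in cache:
            cache.move_to_end(key)  # mark as recently used
            return cache[key][0]
        return None

cache = FairChunkCache(total_bytes=1000)
cache.put("layer0", (0, 0), b"x" * 400, 400)
cache.put("layer1", (0, 0), b"y" * 400, 400)  # budget drops to 500 per layer
cache.put("layer0", (0, 1), b"z" * 400, 400)  # evicts layer0's oldest chunk
```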
A persistent cache is also worth considering. In many cases a user works with the same large image dataset across multiple napari sessions; a persistent cache would speed up subsequent sessions after the initial session that loads the data.
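A minimal sketch of the persistence idea, using only the standard library (pickled chunks in a cache directory keyed by a hash of the chunk identity). This is illustrative only; a real implementation would also need eviction, versioning, and invalidation when the underlying data changes.

```python
import hashlib
import pickle
import tempfile
from pathlib import Path

class PersistentChunkCache:
    """Sketch of a disk-backed chunk cache that survives napari sessions."""

    def __init__(self, cache_dir):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def _path(self, key):
        # Stable filename from (dataset, scale, chunk index) or similar.
        digest = hashlib.sha256(repr(key).encode()).hexdigest()
        return self.cache_dir / f"{digest}.pkl"

    def put(self, key, chunk):
        self._path(key).write_bytes(pickle.dumps(chunk))

    def get(self, key):
        path = self._path(key)
        return pickle.loads(path.read_bytes()) if path.exists() else None

# A first "session" writes a chunk; a second instance pointed at the same
# directory can read it back without reloading the data.
cache_dir = tempfile.mkdtemp()
PersistentChunkCache(cache_dir).put(("my.zarr", 0, (3, 4)), [1, 2, 3])
chunk = PersistentChunkCache(cache_dir).get(("my.zarr", 0, (3, 4)))
```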
While the prototype used the native Python functools-based cache, a Dask cache (based on cachey) will need to be developed. This will require exposing the cache contents to enable more intelligent methods for cache clearing, and adapting how elements are prioritized within the cache.
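The prioritization idea can be sketched without depending on cachey itself. The hypothetical class below scores entries by how costly they were to produce and how often they are reused (the same intuition cachey's scoring follows), and, unlike functools.lru_cache, keeps its contents inspectable so external code can clear the lowest-priority entries explicitly.

```python
import heapq

class PrioritizedCache:
    """Sketch of cost- and reuse-aware cache prioritization."""

    def __init__(self):
        self.data = {}    # key -> cached value
        self.scores = {}  # key -> priority score

    def put(self, key, value, cost=1.0):
        # Expensive-to-produce entries (e.g. downsampled tiles) start with
        # a higher score and so are evicted later.
        self.data[key] = value
        self.scores[key] = self.scores.get(key, 0.0) + cost

    def get(self, key):
        if key in self.data:
            self.scores[key] += 1.0  # reward reuse
            return self.data[key]
        return None

    def clear_lowest(self, n):
        """Evict the n lowest-priority entries (e.g. under memory pressure)."""
        for key in heapq.nsmallest(n, self.scores, key=self.scores.get):
            del self.data[key]
            del self.scores[key]

cache = PrioritizedCache()
cache.put(("scale0", (0, 0)), "cheap tile", cost=1.0)
cache.put(("scale3", (0, 0)), "expensive downsampled tile", cost=10.0)
cache.get(("scale3", (0, 0)))
cache.clear_lowest(1)  # the cheap, unused tile goes first
```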
The current prototype uses a naive parallelization strategy based on
the threading module, but Dask has improved support for
parallelization through an executor abstraction.
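Because Dask's distributed Client can expose a concurrent.futures-compatible executor (via Client.get_executor()), the rendering code can be written against the generic Executor interface and stay agnostic about whether work runs on a local thread pool or a Dask cluster. The sketch below uses a local ThreadPoolExecutor as a stand-in, with a toy fetch_chunk function in place of real chunk loading.

```python
from concurrent.futures import Executor, ThreadPoolExecutor, as_completed

def fetch_chunk(chunk_index):
    """Stand-in for a real chunk load (disk read, decompression, etc.)."""
    return chunk_index, sum(chunk_index)

def load_visible_chunks(executor: Executor, chunk_indices):
    """Submit chunk loads through any concurrent.futures-style executor.

    Swapping ThreadPoolExecutor for a Dask-backed executor requires no
    changes here, since both implement submit() returning futures.
    """
    futures = [executor.submit(fetch_chunk, idx) for idx in chunk_indices]
    return dict(f.result() for f in as_completed(futures))

with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = load_visible_chunks(pool, [(0, 0), (0, 1), (1, 0)])
```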
These improvements should be implemented on top of https://github.com/kephale/napari-multiscale-rendering-prototype/blob/main/src/napari_multiscale_rendering_prototype/multiscale_prototype_003.py
When https://github.com/kephale/napari-multiscale-rendering-prototype/issues/2 is complete, the UI effort for exposing the renderer should be minimal.
Initial efforts on the Dask cache have begun here: https://github.com/kephale/napari-multiscale-rendering-prototype/blob/1109e7693fce9fcca81c521ba11ce556dafb1bc5/src/napari_multiscale_rendering_prototype/utils.py#L42