OHIF / Viewers

OHIF zero-footprint DICOM viewer and oncology-specific Lesion Tracker, plus shared extension packages
https://docs.ohif.org/
MIT License

HTAN spike task: tile images rendering libraries review #2175

Closed Punzo closed 3 years ago

Punzo commented 3 years ago

HTAN requirements for the rendering libraries are: 1) the tile source should be customizable, i.e., it should retrieve frames (load tiles) via DICOMweb WADO-RS; 2) it should be able to process tile layers to apply color functions: the ideal solution would be a library that allows running custom shaders on each layer; 3) it should be able to blend tile layers.
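As a concrete sketch of requirement (1), a custom tile-load function would map a tile coordinate to a DICOMweb WADO-RS frame request. This is our own illustrative helper (the name and parameters are not from any library), and it assumes the whole-slide instance uses TILED_FULL dimension organization, i.e., frames stored in row-major tile order:

```javascript
// Hypothetical helper: map a (col, row) tile coordinate to a WADO-RS frame
// URL, assuming TILED_FULL organization (frames in row-major tile order).
function wadoRsFrameUrl(baseUrl, studyUID, seriesUID, instanceUID, col, row, tilesPerRow) {
  // DICOM frame numbers are 1-based
  const frameNumber = row * tilesPerRow + col + 1;
  return `${baseUrl}/studies/${studyUID}/series/${seriesUID}` +
    `/instances/${instanceUID}/frames/${frameNumber}`;
}
```

A rendering library satisfies this requirement if its tile layer lets us plug in such a function (and decode the returned frame) instead of a fixed URL template.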

We have already investigated two libraries, OpenLayers and OpenSeaDragon; both render the data with the canvas2d API, which does not allow customizing the WebGL code for point (2). Although a non-ideal solution can be achieved (link), here we report an investigation of other libraries and their capabilities, checking whether they can fully satisfy the requirements listed above.

A) deck.gl (see also paper): .....

B) leaflet: .....

C) mapbox-gl: .....

Punzo commented 3 years ago

(B) A Leaflet tile layer is a simple img element (https://github.com/Leaflet/Leaflet/blob/master/src/layer/tile/TileLayer.js#L137), and custom layers can also render the data with the canvas2d API: https://leafletjs.com/reference-1.0.3.html#gridlayer

However, there are plugins:
https://leafletjs.com/plugins.html#tile-load https://leafletjs.com/plugins.html#tileimage-display

for example, it is possible to use a different load tile function: https://github.com/ismyrnow/Leaflet.functionaltilelayer https://leafletjs.com/examples/extending/extending-2-layers.html

and this one combines up to 6 layers and applies color functions with custom shaders: https://github.com/equinor/leaflet.tilelayer.gloperations

It then copies the results back to the canvas2d API (i.e., the same idea we wanted to implement for OpenLayers, but on GPU instead of CPU): https://github.com/equinor/leaflet.tilelayer.gloperations/blob/338507ab827ce83f97d706e27ad39c101c295b25/src/index.ts#L2669

Punzo commented 3 years ago

(C) mapbox-gl does not allow a custom tile loader; see https://github.com/mapbox/mapbox-gl-js/issues/2920 (requirement 1 not satisfied)

However, it is possible to define a custom layer with custom shaders; see https://docs.mapbox.com/mapbox-gl-js/example/custom-style-layer/
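For reference, the custom style layer interface linked above is an object with `type: 'custom'` and `onAdd`/`render` hooks, which is where custom shaders would run. A skeletal sketch (the layer id is hypothetical, and the shader compilation and draw calls are elided):

```javascript
// Skeletal mapbox-gl custom style layer, following the linked example.
// onAdd receives the map and a WebGL context; render receives the context
// and the projection matrix each frame.
const colorFunctionLayer = {
  id: 'color-function-layer', // hypothetical id
  type: 'custom',
  onAdd(map, gl) {
    // compile/link vertex and fragment shaders here; the fragment shader is
    // where a per-layer color function would be applied
  },
  render(gl, matrix) {
    // issue gl.* draw calls; mapbox-gl supplies the projection matrix
  },
};
```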

Punzo commented 3 years ago

(A)

(i) deck.gl:

(ii) viv library (i.e., avivator). The library offers loaders for the Zarr and OME-TIFF data formats and a custom deck.gl layer which applies color functions and blending for up to 6 layers.

Punzo commented 3 years ago

Additional requirements from the HTAN call of 20/11/2020: 1) the composition should support up to N layers; 2) users should be able to set an opacity for each layer, and the blending has to take the opacity into account when adding layers; 3) overlay of heatmaps; 4) loaders for Zarr and OME-TIFF would be nice to have, but are not required.
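Requirements (1) and (2) can be stated precisely as a per-pixel operation: each of N single-channel layers gets a color function (here reduced to a simple tint for illustration) and a user-supplied opacity, and the tinted intensities are accumulated. This CPU reference is our own sketch, not any library's API; a GPU solution would run the same logic per fragment in a shader:

```javascript
// CPU reference for opacity-weighted additive blending of N layers.
// layers: [{ intensity: 0..1, color: [r, g, b] in 0..1, opacity: 0..1 }, ...]
function blendPixel(layers) {
  const out = [0, 0, 0];
  for (const { intensity, color, opacity } of layers) {
    for (let c = 0; c < 3; c++) {
      // each layer contributes its tinted intensity, scaled by its opacity
      out[c] += intensity * color[c] * opacity;
    }
  }
  // clamp to the displayable range
  return out.map((v) => Math.min(1, v));
}
```

The loop runs over an arbitrary number of layers, which is exactly the "up to N layers" requirement that a fixed 6-layer shader does not meet.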

Additional info/remarks:

(Aii) The Viv library/avivator is not a solution for us for the following reasons:

1) the number of layers for the multichannel composition is limited to 6;

2) no user provided alpha value for each layer;

3) views limited to 2;

4) missing DICOMweb loaders;

5) we also want to handle the widgets in the OHIF UI, so that in multiple views we can have different combinations of layers and color functions (which is not available in avivator).

The Viv library is in any case a good example of a custom deck.gl layer with custom shaders (Ai).

Final remarks

Maintaining OpenLayers as the renderer in the dicom-microscopy-viewer and exploiting the offscreen solution (see https://github.com/OHIF/Viewers/issues/2159#issuecomment-726792481, but on GPU) for the composition is currently our preferred solution (although using custom shaders with deck.gl also looks promising).

Punzo commented 3 years ago

We have decided to go for the offscreen solution. Closing this.

ilan-gold commented 3 years ago

@Punzo Hi! I am one of the maintainers of Viv. Our docs can be a bit of a hailstorm to get through (apologies for that) and having a viewer/demo that shows the library but isn't actually part of the library as a demo site doesn't help (it masks some functionality/customizability), so sorry about that. I just wanted to clear a few things up so at least future people who might come across this issue can get the information. Maybe we should add a FAQ to our docs?

As a preliminary the Viv API has multiple levels that allow for customization - you can just use the layers and then manage your deck.gl context on your own (like vizarr or vitessce) or you can use the higher level API that Viv offers for implementing viewers (like the exported Picture-in-Picture or Side-by-Side viewers, as I mention below).

1. number of layers for the multichannel composition limited to 6;

This number is arbitrary and can be bumped if you are interested (see https://github.com/hms-dbmi/viv/issues/304 for more info). In addition, this limit is per deck.gl layer (i.e., per multi-channel image, with one layer per image), not across all deck.gl layers (i.e., all multi-channel images). See here for an example where we have two layers on top of one another with 8 channels total (it could be 12).
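The per-layer split described here can be sketched with a small helper (the name is hypothetical, not part of Viv): partition N channels into groups of at most 6, and create one deck.gl layer per group.

```javascript
// Hypothetical helper: split N channels into groups of up to maxPerLayer,
// one group per deck.gl layer, so the 6-channel per-layer limit does not
// cap the total channel count.
function chunkChannels(channels, maxPerLayer = 6) {
  const groups = [];
  for (let i = 0; i < channels.length; i += maxPerLayer) {
    groups.push(channels.slice(i, i + maxPerLayer));
  }
  return groups;
}
```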

2. no user provided alpha value for each layer;

This example highlights this feature if I understand you correctly.

3. views limited to 2;

This is a limitation of Viv's currently implemented viewers, but you can implement your own or propose a new one for Viv. For example, you could lay out 4 SideBySideViews (instead of 2) using the x and y arguments of the Views API, following Viv's implementation as a guide. I'm happy to answer any questions and be of support - here is a re-implementation of our PIP and SBS viewers in Angular (our viewers are React) using our provided View classes, for example. Alternatively, you could do something like QuPath's multi-channel viewer, where you view 4 channels at once from the same image, split out. We actually have an issue for this if you are interested.

4. missing dicom web loaders;

Yes - though if your loader implements getTile, getRasterSize, and getRaster like Viv's loaders do, Viv's API can handle your imagery (tiled/multi-scale or not). Perhaps this is another point of improvement: making a base abstract class that is clear about what is and isn't part of a loader.
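A DICOMweb-backed loader would then be a class exposing those three methods. The skeleton below is a shape sketch under that duck-typing description, not Viv's actual API (the exact call signatures and return shapes in Viv may differ, and the fetch/decode of WADO-RS frames is stubbed out):

```javascript
// Hypothetical DICOMweb loader skeleton exposing the three methods named
// above. A real implementation would fetch and decode a WADO-RS frame in
// getTile; here we return empty buffers of the right size.
class DicomWebLoader {
  constructor({ tileSize, width, height }) {
    this.tileSize = tileSize; // tile edge length in pixels
    this.width = width;       // full-resolution image width
    this.height = height;     // full-resolution image height
  }
  getRasterSize({ z }) {
    // assume each pyramid level halves the previous resolution
    return { width: this.width >> z, height: this.height >> z };
  }
  async getTile({ x, y, z }) {
    // stub: would issue a WADO-RS frames request for tile (x, y) at level z
    const n = this.tileSize;
    return { data: new Uint8Array(n * n), width: n, height: n };
  }
  async getRaster({ z }) {
    // stub: would return the whole image plane at level z
    const { width, height } = this.getRasterSize({ z });
    return { data: new Uint8Array(width * height), width, height };
  }
}
```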

5. we also want to handle the widgets in the OHIF UI, so that in multiple views we can have different combinations of layers and color functions (which is not available in avivator).

I think you should be able to do this just using Viv's layers and multiple deck.gl views/canvases. Here is an example using multiple layers across different deck.gl canvases, although it may take a second to load.

Lastly, because Viv exports our deck.gl layers, you can overlay things like ROIs or polygonal segmentations over your images using deck.gl PolygonLayers while managing your own deck.gl context, similar to vitessce. I hope this is helpful, and I am excited to see whatever you all develop - offscreen rendering/ops is something I have been interested in for a while, so I'd love to see where this goes!

pieper commented 3 years ago

Hi @ilan-gold thanks for jumping in! As @Punzo says we are super impressed with Viv and how deck.gl provides so many options.

We don't have a live site yet, but the offscreen work in OHIF is progressing really well in another context (PET/CT volume viewing), so we're thinking/hoping it'll be the easiest path for our microscopy work too, since we already have all the dicomweb infrastructure in place. We're implementing the offscreen stuff with vtkjs, which gives us another toybox full of filtering options and utilities to draw on.

Let's stay in touch on these things since there are a bunch of fun things left to be built and it will be great to collaborate.

Punzo commented 3 years ago

Hi @ilan-gold, thanks for your feedback and the additional information regarding the list in https://github.com/OHIF/Viewers/issues/2175#issuecomment-731278934. I completely agree that the Viv library and deck.gl are very promising solutions indeed, and I found your work very interesting (there are a lot of similarities with the astronomy visualization I have worked on).

I should have made clear in the comment that we had two ideal candidates: the offscreen solution and the Viv library. The main reasons why we picked the offscreen solution over Viv are: 1) having vtkjs capabilities, as Steve mentioned; 2) we want to maintain our OHIF microscopy viewer/layout/loadDicom functionality, and integrating it with the Viv library is not straightforward (while the offscreen solution would be easier to integrate). So the decision was purely based on our specific needs. The project will be fully open source, and we will be very glad to share ideas and collaborate.

ilan-gold commented 3 years ago

Ok @Punzo, thanks for the background - excited to see what you all make! Do you have any repos I could look at for the astronomy vis you worked on? I like to get inspiration from all corners of the visualization world.

Punzo commented 3 years ago

SlicerAstro :smiley: