3dct / open_iA

A tool for the visual analysis and processing of volumetric datasets, with a focus on industrial computed tomography.

3D renderer: Volume not shown if larger than graphics card memory #48


codeling commented 4 years ago

When using the OpenGL2 backend in VTK, volumes larger than the graphics card memory are not rendered when the GPU renderer is selected; vtkSmartVolumeMapper does not handle such volumes automatically.

Current Workaround: Use CPU renderer (by switching to "RayCastRenderMode" in renderer settings).
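
For reference, a minimal sketch of that workaround in code, assuming the volume is rendered through vtkSmartVolumeMapper:

```cpp
#include <vtkImageData.h>
#include <vtkNew.h>
#include <vtkSmartVolumeMapper.h>

// Force the smart mapper onto the (fixed-point) CPU ray casting path;
// this sidesteps the GPU memory limit at the cost of rendering speed.
void useCpuRayCasting(vtkImageData* volumeData)
{
    vtkNew<vtkSmartVolumeMapper> mapper;
    mapper->SetInputData(volumeData);
    mapper->SetRequestedRenderModeToRayCast();
}
```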

There are other ways to render such volumes, e.g. using partitions or the OSPRay renderer (see this discussion on using vtkSmartVolumeMapper for large volumes, or using vtkMultiBlockVolumeMapper as mentioned in this discussion).

The goal of this issue is to experiment with the different options and find out which is best suited for our purpose. Ultimately, an automatic mechanism should choose the most suitable rendering mode, depending on volume size and available GPU memory.
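
A rough sketch of how such an automatism could look. Note that gpuMemoryBytes is a placeholder: how to reliably query the available GPU memory is itself an open question (platform- and vendor-specific), so this only shows the selection logic:

```cpp
#include <vtkImageData.h>
#include <vtkSmartVolumeMapper.h>

// Pick a render mode from dataset size vs. (assumed known) GPU memory.
// gpuMemoryBytes is hypothetical; VTK does not provide a portable query for it.
void chooseRenderMode(vtkSmartVolumeMapper* mapper, vtkImageData* volumeData,
    long long gpuMemoryBytes)
{
    // GetActualMemorySize() reports the size of the dataset in kibibytes.
    long long volumeBytes =
        static_cast<long long>(volumeData->GetActualMemorySize()) * 1024;
    // Leave headroom for framebuffers, transfer function textures, etc.
    if (volumeBytes < gpuMemoryBytes * 3 / 4)
    {
        mapper->SetRequestedRenderModeToGPU();
    }
    else
    {
        mapper->SetRequestedRenderModeToRayCast();  // CPU fallback
    }
}
```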

codeling commented 4 months ago

Update: Using SetPartitions on vtkOpenGLGPUVolumeRayCastMapper is not a good idea; at least currently it is very slow in our tests, as also described here. Based on the numbers reported there, vtkMultiBlockVolumeMapper seems like a viable solution.
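
For reference, this is roughly how the (slow) SetPartitions variant is set up; the 2x2x1 split is an arbitrary example value:

```cpp
#include <vtkImageData.h>
#include <vtkNew.h>
#include <vtkOpenGLGPUVolumeRayCastMapper.h>

void setupPartitionedGpuMapper(vtkImageData* volumeData)
{
    vtkNew<vtkOpenGLGPUVolumeRayCastMapper> mapper;
    mapper->SetInputData(volumeData);
    // Split the volume into 2x2x1 bricks that are streamed to the GPU and
    // rendered one at a time, so only one brick must fit into GPU memory.
    mapper->SetPartitions(2, 2, 1);
}
```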

Another possibility might be to use "partitioned datasets" via vtkPartitionedDataSet, as described in this ParaView forum post; at this point it is unclear how these can be used for volume rendering, and whether they are more performant than vtkMultiBlockVolumeMapper.
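
For illustration only, assembling a vtkPartitionedDataSet from already-split image blocks would look roughly like this (block0/block1 stand in for pre-split chunks; whether a volume mapper consumes this structure directly still needs to be verified):

```cpp
#include <vtkImageData.h>
#include <vtkPartitionedDataSet.h>
#include <vtkSmartPointer.h>

vtkSmartPointer<vtkPartitionedDataSet> makePartitionedVolume(
    vtkImageData* block0, vtkImageData* block1)
{
    auto pds = vtkSmartPointer<vtkPartitionedDataSet>::New();
    pds->SetNumberOfPartitions(2);
    pds->SetPartition(0, block0);
    pds->SetPartition(1, block1);
    return pds;
}
```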

In general, it seems that for both of these solutions (vtkMultiBlockVolumeMapper and vtkPartitionedDataSet), the dataset needs to be split up "manually". This either requires splitting up the volume in memory (leading to duplicated memory consumption) or loading separate, pre-split chunks of the full dataset as individual volume datasets; especially the second option would require a larger change to our dataset loading/representation.
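
To illustrate the first (in-memory splitting) option, a hedged sketch that halves a volume along z with vtkExtractVOI (which copies the sub-volumes, hence the duplicated memory consumption) and hands the blocks to vtkMultiBlockVolumeMapper:

```cpp
#include <vtkExtractVOI.h>
#include <vtkImageData.h>
#include <vtkMultiBlockDataSet.h>
#include <vtkMultiBlockVolumeMapper.h>
#include <vtkNew.h>
#include <vtkSmartPointer.h>

vtkSmartPointer<vtkMultiBlockVolumeMapper> makeBlockMapper(vtkImageData* volumeData)
{
    int ext[6];
    volumeData->GetExtent(ext);
    int zMid = (ext[4] + ext[5]) / 2;

    auto blocks = vtkSmartPointer<vtkMultiBlockDataSet>::New();
    blocks->SetNumberOfBlocks(2);
    // The two blocks share the boundary slice zMid to avoid a visible seam.
    int zRanges[2][2] = { { ext[4], zMid }, { zMid, ext[5] } };
    for (int i = 0; i < 2; ++i)
    {
        vtkNew<vtkExtractVOI> extract;
        extract->SetInputData(volumeData);
        extract->SetVOI(ext[0], ext[1], ext[2], ext[3],
            zRanges[i][0], zRanges[i][1]);
        extract->Update();  // copies the sub-volume out of the full dataset
        blocks->SetBlock(i, extract->GetOutput());
    }

    auto mapper = vtkSmartPointer<vtkMultiBlockVolumeMapper>::New();
    mapper->SetInputDataObject(blocks);
    return mapper;
}
```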