widgetti / ipyvolume

3d plotting for Python in the Jupyter notebook based on IPython widgets using WebGL
MIT License

Help with plotting RGBA volumetric data #220

Open mlubej opened 5 years ago

mlubej commented 5 years ago

Hello!

I work in the Earth Observation (EO) field, where we deal with time series of satellite images. I have a dataset with data.shape = (50, 1000, 1000, 4), where the first dimension is the time series, the second and third are the height and width of the image, and the last one is the RGBA channel, where A is a binary 0 or 1 indicating whether clouds have been detected in the image (a cloud mask).

I would like to plot the data cube in RGB, and set the opacity to 0 or some low value for the voxels with clouds. Is something like this possible with ipyvolume?

I noticed that something could be possible with multi-volume rendering, drawing the multivolume three times, once for each color channel, but I had trouble setting the transfer functions properly, so the resulting mixture did not look the way I wanted it to...

Does an example of this exist somewhere? If not, I can provide the data and maybe we can figure something out?

I want it to look something like this:

[image: example RGB volume rendering of the data cube]

This image was done in Wolfram Mathematica, but I would very much like to make it in Python.

Kind regards, Matic

maartenbreddels commented 5 years ago

Hi Matic,

I think it should be possible using multivolume rendering. For each channel, use a pre-multiplied color as the intensity (for red, that means red*alpha), and for each transfer function use a linear ramp (for red, going from pure red at alpha=0 to pure red at alpha=1). It will not give you something exactly like this though... let me know if this is good enough, or maybe with this start you can make some progress; otherwise we have to come up with a better solution (true RGBA rendering).
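A minimal numpy sketch of this idea (the small random cube, the channel indices, and the table size are placeholders for the real data; the ipyvolume calls are only indicated in comments):

```python
import numpy as np

# Stand-in for the real (50, 1000, 1000, 4) cube: last axis is R, G, B, A,
# with A = 0 where clouds were detected (hypothetical random data).
rng = np.random.default_rng(0)
rgba = rng.random((4, 8, 8, 4))
rgba[..., 3] = (rgba[..., 3] > 0.3).astype(float)  # binary cloud mask as alpha

def premultiplied(channel):
    # per-channel intensity = color * alpha (for red: red * alpha)
    return rgba[..., channel] * rgba[..., 3]

def ramp_tf_table(channel, n=256):
    # linear-ramp lookup table: pure color throughout,
    # alpha ramping from 0 (low intensity) to 1 (high intensity)
    tab = np.zeros((n, 4))
    tab[:, channel] = 1.0
    tab[:, 3] = np.linspace(0.0, 1.0, n)
    return tab

# Each channel would then be added to one figure, e.g.:
#   import ipyvolume as ipv
#   ipv.volshow(premultiplied(0), tf=ipv.TransferFunction(rgba=ramp_tf_table(0)))
```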

cheers,

Maarten

mlubej commented 5 years ago

Hi Maarten,

thanks for responding so quickly.

I prepared a dataset which you can download here; it's 7 MB.

So this is what I came up with:

import ipyvolume as ipv
import numpy as np
import matplotlib.pyplot as plt

# load data: RGB intensities between 0 and 1
# dimensions: time frame, height, width, color channel
# shape: (5, 337, 333, 3)
data = np.load('data.npy')

# plot a few time frames
fig = plt.figure(figsize=(20, 5))
for i in range(5):
    plt.subplot(1, 5, i + 1)
    plt.imshow(data[i])
    plt.xticks([])
    plt.yticks([])
plt.tight_layout()

[image: the five RGB time frames plotted side by side]

Now, if I try following your instructions and the examples, I do something like this:

# define the transfer functions as r, g, b linear ramps
def get_tf(col=None):
    col_dict = {'r': 0, 'g': 1, 'b': 2}
    tab = np.zeros((255, 4))
    tab[:, col_dict[col]] = np.linspace(0, 1, 255)  # color ramps up
    tab[:, -1] = 1.0                                # alpha constant at 1
    tf = ipv.TransferFunction(rgba=tab)
    return tf

# plot the multi-volume
fig = ipv.figure()
ct_vol_r = ipv.volshow(data[..., 0], controls=False, tf=get_tf('r'), max_shape=512)
ct_vol_g = ipv.volshow(data[..., 1], controls=False, tf=get_tf('g'), max_shape=512)
ct_vol_b = ipv.volshow(data[..., 2], controls=False, tf=get_tf('b'), max_shape=512)

ipv.show()

[image: the resulting ipyvolume render]

And I get this dark cube.

Questions:

Thanks a lot and cheers!

mlubej commented 5 years ago

I see that I mixed up the ramps in the transfer function, but playing around with them didn't seem to help much...

MattWenham commented 5 years ago

Apologies if the following is stating the obvious.

True RGBA rendering can be solved as a ray casting problem. The research team I was working with undertook some research into this for the visualisation of hyperspectral images in the late '90s. There doesn't appear to be a copy of the relevant paper online, and its title only gives four hits on Google. The best I can do is this link; it may be a useful paper for a possible approach to implementation.

From discussions I had at the time, I remember that unless you had very low opacity values, it was never possible to see very far 'into' the rendered cube. If you consider the ray passing out from the observation point (eye) into the rendered cuboid, once the cumulative sum of the opacity along that ray equals one, everything deeper than that point is obscured.
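The obscuring effect described here can be sketched as a toy front-to-back compositing loop (an illustration of the principle only, not ipyvolume's actual shader code):

```python
import numpy as np

def composite_ray(colors, alphas, stop=0.99):
    """Front-to-back alpha compositing of samples along one ray.
    Stops early once the accumulated opacity makes everything
    deeper along the ray effectively invisible."""
    out = np.zeros(3)
    acc = 0.0  # accumulated opacity so far
    for color, alpha in zip(colors, alphas):
        out += (1.0 - acc) * alpha * np.asarray(color, dtype=float)
        acc += (1.0 - acc) * alpha
        if acc >= stop:  # deeper samples are obscured
            break
    return out, acc

# A fully opaque red sample in front hides the green one behind it:
color, opacity = composite_ray([(1, 0, 0), (0, 1, 0)], [1.0, 1.0])
# color == [1, 0, 0], opacity == 1.0
```

With very low per-sample opacities the loop never terminates early and deeper samples keep contributing, which is exactly why low opacity is needed to see "into" the cube.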

maartenbreddels commented 5 years ago

The key thing is to make the lower levels truly transparent, this gives a better result, but not perfect yet:

def get_tf(col=None):
    col_dict = {'r': 0, 'g': 1, 'b': 2}
    tab = np.zeros((255, 4))
    tab[:, col_dict[col]] = 1  # pure color, no ramp
    tab[:, -1] = 0.05          # low constant alpha...
    tab[0:100, -1] = 0         # ...and fully transparent lower levels
    tf = ipv.TransferFunction(rgba=tab)
    return tf

for vol, c in zip([ct_vol_r, ct_vol_g, ct_vol_b], 'rgb'):
    vol.tf = get_tf(c)
[screenshot: the improved render, 2019-01-22]

Hope that helps. I think a crucial issue though is that the voxels that lie on top of each other are rendered as if they lie behind each other, so I am not sure if the volume renderer as it is can do what you want.

mlubej commented 5 years ago

> From discussions I had at the time, I remember that unless you had very low opacity values, it was never possible to see very far 'into' the rendered cube. If you consider the ray passing out from the observation point (eye) into the rendered cuboid, once the cumulative sum of the opacity along that ray equals one, everything deeper than that point is obscured.

This is a good point. Depth was never meant to be important here, as you can see in the originally provided images, so the interior voxels don't even need to be plotted, which makes things much less intensive. The only complication is the "missing" data with zero opacity: in that case the deeper levels are drawn, but they are still the first visible layer.
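How much cheaper a faces-only rendering would be is easy to estimate; a small sketch (the shapes below are illustrative, matching the demo cube's spatial size):

```python
import numpy as np

def shell_mask(shape):
    # True for voxels on the outer faces of the volume; for a cube
    # whose interior is never visible, only these need rendering
    m = np.zeros(shape, dtype=bool)
    for ax in range(len(shape)):
        idx = [slice(None)] * len(shape)
        idx[ax] = 0
        m[tuple(idx)] = True
        idx[ax] = -1
        m[tuple(idx)] = True
    return m

m = shell_mask((50, 337, 333))
fraction = m.sum() / m.size  # fraction of voxels on the surface, ~5%
```

Voxels with zero alpha would punch holes in this shell, exposing some deeper voxels, so the real set of potentially visible voxels is somewhat larger than the bare outer faces.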