enthought / mayavi

3D visualization of scientific data in Python
http://docs.enthought.com/mayavi/mayavi/

How to volume render a transparent space with mlab.pipeline.volume() and mlab.pipeline.scalar_field() as a data source? #1285

Closed barikata1984 closed 6 months ago

barikata1984 commented 7 months ago

Hi all,

Can anybody help me create a video showing a volume-rendered scalar field that shrinks over time?

I am simulating a material dynamics problem. At every simulation step I get a vector, which is later reshaped into a 3D scalar field array; its maximum value is 1.0 and all other values are lower. In my problem, the coordinates that return the minimum value are ignored in the next simulation step. So I want to visualize this process as an animation, like a volume-rendered space gradually shrinking.

Below is a simplified version of my code. I provide mlab.pipeline.volume() with an mlab.pipeline.scalar_field() object generated from a data array. I thought that if I set 0.0 at the ignored coordinates in the data array, the area corresponding to those coordinates would become transparent and I could make the video I wanted. However, as the for loop proceeds, the space gets darker from a reddish color and finally becomes black (link to the resultant video on my Google Drive).

So, I am wondering:

  1. How can I plot the ignored area as a transparent space?
  2. Why is the whole space rendered red or black? This actually happens with my original code as well. I expected something more colorful because the data array stores values between 0.0 and 1.0.

from mayavi import mlab
from moviepy import editor
from tqdm import tqdm
import os
import numpy as np

mlab.options.offscreen = True

# Make a logging directory
log_root = os.getcwd()
log_dir = os.path.join(log_root, "test") 
os.makedirs(log_dir, exist_ok=True)
# Initial rendering of the mass distribution
fig = mlab.figure(bgcolor=(0, 0, 0), size=(960, 960))
resolution = 3
num_coord_ax = 2**resolution
init_scalars = np.ones((num_coord_ax, num_coord_ax, num_coord_ax))
scalar_field = mlab.pipeline.scalar_field(init_scalars)
volume = mlab.pipeline.volume(scalar_field, figure=fig)
mlab.colorbar(orientation="vertical")
# Boolean index (bidx) vector that manages the active voxels in the volume-rendered space
active_bidx = init_scalars.astype(int).reshape(-1)

# Pseudo simulation loop
#for i in tqdm(range(128), desc="Simulating..."):
for i in range(512):
    # Sample vector mimicking the problem of interest
    new_data = np.random.rand(active_bidx.sum())
    new_data /= new_data.max()  # ~ (num_active_vxs, )
    # Truncate the minimum values
    bidx = new_data == new_data.min()
    new_data[bidx] = 0.0
    # Build a new source array with zeros at the inactive (ignored) voxels
    new_src = np.zeros_like(init_scalars)  # ~ (num_coord_ax, num_coord_ax, num_coord_ax)
    new_src.reshape(-1)[active_bidx.astype(bool)] = new_data
    # Update the boolean vector 
    active_bidx[active_bidx.astype(bool)] &= ~bidx
    print(f"{active_bidx.sum()=}")  # check whether the evaluated area is shrinking;
                                    # the printed value decreases, which implies
                                    # that the area is shrinking

    # Volume render the new data
    volume.mlab_source.scalars = new_src
    mlab.savefig(os.path.join(log_dir, f"{i:04}.png"), figure=fig)

# Set misc variables for composing a video from the .pngs
input_ext = ".png"
output_name = "test.mp4"
fps = 60
codec = "libx264"
# Get a list of image paths
imgs = sorted(os.listdir(log_dir))
images = [img for img in imgs if img.endswith(input_ext)]
image_paths = [os.path.join(log_dir, img) for img in images]
# Generate a video and save it locally
video_name = os.path.join(log_dir, output_name)
clip = editor.ImageSequenceClip(image_paths, fps=fps)
clip.write_videofile(video_name, codec=codec, fps=fps)

Thanks in advance!

barikata1984 commented 6 months ago

Okay, I was able to do what I wanted. Below is the updated code. The keys were ColorTransferFunction (ctf) and OpacityTransferFunction (otf).

A LUTManager is instantiated with a colormap supported by Mayavi, and its lookup table is turned into a NumPy array (the solution to my question 2). In addition, the alpha value is set to 0.0 where the scalar is 0.0 (the solution to my question 1). Those settings are then registered on the volume object with ctf.load_ctfs(). The code appears to generate only a ctf, but an otf is actually generated inside load_ctfs() (look into the source code).
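
For reference, the two transfer functions can also be built and registered separately, as mentioned in NOTE 2 in the code below. Here is a minimal sketch of that pattern, following the volume-rendering example in the Mayavi documentation on changing the looks of objects; the color and opacity points are only illustrative, and volume is assumed to be the module returned by mlab.pipeline.volume() as in the script below.

from tvtk.util.ctf import ColorTransferFunction, PiecewiseFunction

# Color: map scalar 0.0 to blue and scalar 1.0 to red (illustrative values)
color_tf = ColorTransferFunction()
color_tf.add_rgb_point(0.0, 0.0, 0.0, 1.0)
color_tf.add_rgb_point(1.0, 1.0, 0.0, 0.0)

# Opacity: fully transparent at scalar 0.0, half opaque at scalar 1.0
opacity_tf = PiecewiseFunction()
opacity_tf.add_point(0.0, 0.0)
opacity_tf.add_point(1.0, 0.5)

# Register both functions on the volume module and notify it of the change
volume._volume_property.set_color(color_tf)
volume._ctf = color_tf
volume._volume_property.set_scalar_opacity(opacity_tf)
volume._otf = opacity_tf
volume.update_ctf = True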

Unfortunately, a colorbar cannot be put on the figure by just passing lut_mngr to the object argument of mlab.colorbar(). If somebody knows a good way to do that, please share it.

from mayavi import mlab
from mayavi.core.lut_manager import LUTManager 
from tvtk.util import ctf
from moviepy import editor
from tqdm import tqdm
import os
import numpy as np

mlab.options.offscreen = True

# Make a logging directory
log_root = os.getcwd()
log_dir = os.path.join(log_root, "test") 
os.makedirs(log_dir, exist_ok=True)
# Initial rendering of the mass distribution
fig = mlab.figure(bgcolor=(1, 1, 1), size=(960, 960))
resolution = 3
num_coord_ax = 2**resolution
init_scalars = np.ones((num_coord_ax, num_coord_ax, num_coord_ax))
scalar_field = mlab.pipeline.scalar_field(init_scalars)
volume = mlab.pipeline.volume(scalar_field, figure=fig)

# Get a mayavi colormap as a numpy array
colormap_name = "blue-red"  # anything on https://docs.enthought.com/mayavi/mayavi/mlab_changing_object_looks.html#adding-color-or-size-variations 
num_colors = 256
lut_mngr = LUTManager(number_of_colors=num_colors, lut_mode=colormap_name)
colormap = lut_mngr.lut.table.to_array().astype(float) / 255  # ~ (num_colors, 4), values in [0.0, 1.0]
rgbs = colormap[:, :3]  # retrieve rgb channels
print(f"{colormap=}")

# Customize ColorTransferFunction (ctf) and OpacityTransferFunction (otf)
# NOTE 1: the otf is generated internally from the "alpha" channel of the ctf
# NOTE 2: The two functions can also be registered separately. If you want to do that, refer to
# the source of load_ctfs at https://github.com/enthought/mayavi/blob/master/tvtk/util/ctf.py#L65
volume_ctf = ctf.save_ctfs(volume._volume_property)
#volume_ctf["range"] = [0.0, 1.0]
volume_ctf["rgb"] = [[i / (len(rgbs) - 1), *rgb] for i, rgb in enumerate(rgbs)]
volume_ctf["alpha"] = [[0.0, 0.0],  # alpha has to be 0.0 at 0.0 == scalar
                       [1.0, 0.5]]  # if 0.3 == scalar alpha ch is set at 0.3,
# Register the new ctf to the volume module
ctf.load_ctfs(volume_ctf, volume._volume_property)
# Notify the volume of the registration of new ctf and otf
volume.update_ctf = True

# Putting a colorbar as below does not work; it fails with the error message
# "This object has no color map". You may have to build a colorbar object (or
# something similar) to be displayed on the figure, but I have not looked into
# whether there is a way to make such an object.
#mlab.colorbar(object=lut_mngr, orientation="vertical")

# Boolean index (bidx) vector that manages the active voxels in the volume-rendered space
active_bidx = init_scalars.astype(int).reshape(-1)

# Pseudo simulation loop
for i in tqdm(range(num_coord_ax**3), desc="Simulating..."):
#for i in range(512):
    # Sample vector mimicking the problem of interest
    new_data = np.random.rand(active_bidx.sum())
    new_data /= new_data.max()  # ~ (num_active_vxs, )
    # Truncate the minimum values
    bidx = new_data == new_data.min()
    new_data[bidx] = 0.0
    # Build a new source array with zeros at the inactive (ignored) voxels
    new_src = np.zeros_like(init_scalars)  # ~ (num_coord_ax, num_coord_ax, num_coord_ax)
    new_src.reshape(-1)[active_bidx.astype(bool)] = new_data
    # Update the boolean vector 
    active_bidx[active_bidx.astype(bool)] &= ~bidx
#    print(f"{active_bidx.sum()=}")  # check whether the evaluated area is shrinking;
                                     # the printed value decreases, which implies
                                     # that the area is shrinking

    # Volume render the new data
    volume.mlab_source.scalars = new_src
    mlab.savefig(os.path.join(log_dir, f"{i:04}.png"), figure=fig)

# Set misc variables for composing a video from the .pngs
input_ext = ".png"
output_name = "test.mp4"
fps = 60
codec = "libx264"
# Get a list of image paths
imgs = sorted(os.listdir(log_dir))
images = [img for img in imgs if img.endswith(input_ext)]
image_paths = [os.path.join(log_dir, img) for img in images]
# Generate a video and save it locally
video_name = os.path.join(log_dir, output_name)
clip = editor.ImageSequenceClip(image_paths, fps=fps)
clip.write_videofile(video_name, codec=codec, fps=fps)