tensorflow / graphics

TensorFlow Graphics: Differentiable Graphics Layers for TensorFlow
Apache License 2.0

How do you think we can efficiently render Molecular Dynamics with TF-GFX? #29

Closed bionicles closed 5 years ago

bionicles commented 5 years ago

We want to use TensorFlow Graphics for molecular dynamics. We already have the learned physics engine working, but we're stuck because our current movie maker, PyMOL, is slow and paid; if you use the latest version from Conda, it puts gross watermark text on your movies. Warren Delano rolls in his grave.

How hard is it to draw moving sphere-clouds in TF-GFX, and would it be possible to add a simple tutorial for this?

We just want to JIGGLE ATOMS: load a tensor of XYZ coordinates shaped like an NLP tensor, (batch_size, n_atoms, n_features), draw one sphere per atom with per-atom coloring, then move the spheres around, set keyframes, and interpolate between those keyframes at 30 fps. That would get us a working prototype, and I wager TF-GFX would render much, much faster than just about anything else if we leverage stuff like tf.vectorized_map and @tf.function...
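For the keyframe piece, linear interpolation over an XYZ tensor at 30 fps is just a lerp along a new time axis. A minimal sketch in plain TensorFlow (`lerp_frames` is our own name, not a TF-GFX function):

```python
import tensorflow as tf

def lerp_frames(key_a, key_b, n_frames):
    """Linearly interpolate n_frames coordinate sets between two keyframes.

    key_a, key_b: (n_atoms, 3) float tensors of XYZ positions.
    Returns a (n_frames, n_atoms, 3) tensor whose first frame equals
    key_a and whose last frame equals key_b.
    """
    t = tf.linspace(0.0, 1.0, n_frames)   # (n_frames,) interpolation weights
    t = tf.reshape(t, [-1, 1, 1])         # broadcast over atoms and xyz
    return (1.0 - t) * key_a + t * key_b

# two atoms moving over one second at 30 fps
start = tf.constant([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
end = tf.constant([[0.0, 0.0, 3.0], [1.0, 3.0, 0.0]])
frames = lerp_frames(start, end, 30)      # shape (30, 2, 3)
```

One second of footage per keyframe pair is then just a matter of choosing n_frames = 30.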

This would prove that TF-GFX can be useful for medicine (drug design) ... not to mention AR, VR, gaming, everything visual. If a video is just a space-time pixel-tensor, why not use tensorflow?

Advanced usage of TF-GFX for molecular dynamics might involve the surface-drawing algorithm: basically, you roll a probe sphere all over each molecule, and the mesh traced where the sphere meets the molecule is the surface.
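The rolling-sphere idea (the solvent-accessible surface) can be approximated numerically: sample points on each atom's probe-inflated sphere and keep only those not buried inside any neighbor's inflated sphere. A NumPy sketch with our own helper names, nothing from TF-GFX; 1.4 Å is the usual water-probe radius:

```python
import numpy as np

def fibonacci_sphere(n):
    """Roughly uniform points on the unit sphere (Fibonacci lattice)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def sas_points(centers, radii, probe=1.4, n_samples=256):
    """Sample the solvent-accessible surface of a set of atoms.

    centers: (n_atoms, 3), radii: (n_atoms,). Returns (m, 3) points on
    each probe-inflated sphere that no other inflated sphere covers.
    """
    unit = fibonacci_sphere(n_samples)                    # (s, 3)
    keep = []
    for i, (c, r) in enumerate(zip(centers, radii)):
        pts = c + (r + probe) * unit                      # candidate points
        # distance of each candidate to every atom center
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        d[:, i] = np.inf                                  # ignore own sphere
        exposed = np.all(d >= radii[None, :] + probe, axis=1)
        keep.append(pts[exposed])
    return np.concatenate(keep, axis=0)

# a lone carbon-ish atom: every sampled point survives
pts = sas_points(np.zeros((1, 3)), np.array([1.7]))
```

Triangulating the resulting point cloud into a mesh is the harder part; this only gets you the surface samples.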

I was gonna make a draw_sphere function with an icosahedral mesh. We'd also need to translate and rotate individual molecules (tensors of spheres) independently to undock; unfold is harder because we need to select dihedral bonds and set them to random angles.

Here's example code for how we do this now; 'cmd' is from PyMOL: https://pymol.org/pymol-command-ref.html

import random

import numpy as np
from pymol import cmd  # PyMOL's scripting API

# example bounds for the random undock translation (our values; tune as needed)
MIN_UNDOCK_DISTANCE = 10
MAX_UNDOCK_DISTANCE = 100

def undock(chains):
    for chain in chains:
        selection_string = 'chain ' + chain
        translation_vector = [
            random.randrange(MIN_UNDOCK_DISTANCE, MAX_UNDOCK_DISTANCE),
            random.randrange(MIN_UNDOCK_DISTANCE, MAX_UNDOCK_DISTANCE),
            random.randrange(MIN_UNDOCK_DISTANCE, MAX_UNDOCK_DISTANCE)]
        cmd.translate(translation_vector, selection_string)
        cmd.rotate('x', random.randrange(-360, 360), selection_string)
        cmd.rotate('y', random.randrange(-360, 360), selection_string)
        cmd.rotate('z', random.randrange(-360, 360), selection_string)

def unfold(chains):
    # set random backbone dihedrals at every alpha carbon in each chain
    for chain in chains:
        np.array([unfold_index(name, index) for name, index in
                  cmd.index('byca (chain {})'.format(chain))])

def unfold_index(name, index):
    selection_string_array = [
        f'first (({name}`{index}) extend 2 and name C)',  # prev C
        f'first (({name}`{index}) extend 1 and name N)',  # this N
        f'({name}`{index})',                              # this CA
        f'last (({name}`{index}) extend 1 and name C)',   # this C
        f'last (({name}`{index}) extend 2 and name N)']   # next N
    try:
        cmd.set_dihedral(selection_string_array[0],
                         selection_string_array[1],
                         selection_string_array[2],
                         selection_string_array[3], random.randint(0, 360))
        cmd.set_dihedral(selection_string_array[1],
                         selection_string_array[2],
                         selection_string_array[3],
                         selection_string_array[4], random.randint(0, 360))
    except Exception as e:
        print('failed to set dihedral at ', name, index)
        print(e)
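For comparison, here is a sketch of what the undock step might look like on raw (n_atoms, 3) coordinate tensors in plain TensorFlow. All names below are our own, not TF-GFX API; TF-GFX's rotation utilities under tensorflow_graphics.geometry.transformation could stand in for the hand-rolled Euler helper:

```python
import math

import tensorflow as tf

def euler_rotation_matrix(angles):
    """3x3 rotation matrix from XYZ Euler angles (radians), R = Rz @ Ry @ Rx."""
    ax, ay, az = angles[0], angles[1], angles[2]
    cx, sx = tf.cos(ax), tf.sin(ax)
    cy, sy = tf.cos(ay), tf.sin(ay)
    cz, sz = tf.cos(az), tf.sin(az)
    zero, one = tf.zeros_like(ax), tf.ones_like(ax)
    rx = tf.stack([tf.stack([one, zero, zero]),
                   tf.stack([zero, cx, -sx]),
                   tf.stack([zero, sx, cx])])
    ry = tf.stack([tf.stack([cy, zero, sy]),
                   tf.stack([zero, one, zero]),
                   tf.stack([-sy, zero, cy])])
    rz = tf.stack([tf.stack([cz, -sz, zero]),
                   tf.stack([sz, cz, zero]),
                   tf.stack([zero, zero, one])])
    return rz @ ry @ rx

def undock_tf(coords, max_shift=10.0, seed=0):
    """Randomly rotate one molecule about its centroid, then translate it."""
    g = tf.random.Generator.from_seed(seed)
    shift = g.uniform([3], -max_shift, max_shift)
    angles = g.uniform([3], 0.0, 2.0 * math.pi)
    r = euler_rotation_matrix(angles)
    center = tf.reduce_mean(coords, axis=0, keepdims=True)
    return (coords - center) @ tf.transpose(r) + center + shift

coords = tf.constant([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.]])
moved = undock_tf(coords, seed=42)
```

Being a rigid transform, this preserves all pairwise atom distances, which is a cheap sanity check to keep around.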

I guess it's a big ask, but... How do you think we can use Tensorflow Graphics to efficiently render Molecular Dynamics?

kamelbelkadhi commented 5 years ago

As @bionicles said, we want TensorFlow Graphics to support 3D object rendering on GPUs, so we don't have to wait many minutes to get 9 seconds of a jiggling-atoms movie.

Rendering atoms on CPUs is really slow, especially for a large number of atoms and when we visualize surfaces (a fancy graphics mode).

While rendering, this is how the CPU state looks: [screenshot: all CPU cores saturated]

The GPU is completely unused: [screenshot: idle GPU]

Here are some videos of the rendering process and how long it takes. They were generated with PyMOL (a molecular visualization system) on a rig with these characteristics:

PyMOL: 2.1.1
Ubuntu: 16.04
Python: 3.6.3
Storage: SSD
Processor: AMD Ryzen 7 1700
RAM: 2x 8 GB DDR4 2400 MHz
GPU: 2x GTX 1060 6GB

Some videos of the docking/undocking/unfold process:

4LGP: 4lgp-spheres, 4lgp-surface
4KRM: 4krm-spheres, 4krm-surface

julienvalentin commented 5 years ago

Hi both!

Very very cool stuff!

@bionicles 'How hard is it to draw moving sphere-clouds in TF-GFX, and would it be possible to add a simple tutorial for this?': There is an example that shows how to render + shade a single sphere: https://colab.sandbox.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/reflectance.ipynb

As you know, occlusions need to be handled when you are dealing with multiple spheres. Unfortunately I am not sure what the fastest way to handle this is, so I encourage you to have a look around; I'll be more than happy to discuss the technical solutions you identify.

Best.
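A common baseline for the occlusion question above is a per-pixel depth test (a z-buffer): intersect one ray per pixel with every sphere analytically and keep the nearest hit. A NumPy sketch of the idea, with orthographic rays along +z and our own function name, nothing from TF-GFX:

```python
import numpy as np

def sphere_depth_image(centers, radii, ids, res=64, extent=2.0):
    """Render nearest-sphere id and depth per pixel, orthographic along +z.

    centers: (n, 3), radii: (n,), ids: (n,) int labels.
    Returns (id_image, depth), both (res, res); id -1 means background.
    """
    xs = np.linspace(-extent, extent, res)
    px, py = np.meshgrid(xs, xs)                  # pixel grid in x/y
    depth = np.full((res, res), np.inf)
    id_image = np.full((res, res), -1)
    for c, r, k in zip(centers, radii, ids):
        # the ray at (px, py) hits the sphere where (px-cx)^2+(py-cy)^2 <= r^2
        d2 = (px - c[0]) ** 2 + (py - c[1]) ** 2
        hit = d2 <= r * r
        # depth of the nearest intersection along z
        z = c[2] - np.sqrt(np.maximum(r * r - d2, 0.0))
        closer = hit & (z < depth)
        depth[closer] = z[closer]                 # keep only the closest hit
        id_image[closer] = k
    return id_image, depth

# two spheres stacked along z: the front one should win at the center pixel
ids_img, depth = sphere_depth_image(
    np.array([[0., 0., 0.], [0., 0., 5.]]),
    np.array([1., 1.]),
    np.array([0, 1]))
```

Shading then becomes a gather over id_image; the per-sphere loop is embarrassingly parallel and would map naturally onto batched tensor ops.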

bionicles commented 5 years ago

@julienvalentin Thank you for the link

If we could interpolate between pairs of keyframes, then we might draw multiple sections of the movie simultaneously and thus generate the movie much faster; this is parallelizable because each frame depends only on the prior and next keyframe.

How hard do you think it would be to implement parallel 3D rendering like this?

It feels like this could greatly accelerate rendering Pixar-style graphics, because we could plug the tensors from an AI agent directly into a thing that makes movies, without numpy or matplotlib conversions, and we could also generate the beginning and end of movies simultaneously given keyframes and scene lengths.

cc @agarwal-ashish re: vectorized_map + graphics

# NOTE: pseudocode -- tensor2movie, zoom_to_fit, project, geometry.sphere,
# and tf.unnest are hypothetical APIs, not part of TF or TF-GFX today
import tensorflow_graphics as gfx
import tensorflow as tf

lambertian = gfx.rendering.reflectance.lambertian
phong = gfx.rendering.reflectance.phong

def make_movie(keyframes, length, save_path):
    frames_per_keyframe = length // tf.shape(keyframes)[0]
    frames = interpolate(keyframes, frames_per_keyframe)
    movie = tf.vectorized_map(draw_frame, frames)
    gfx.tensor2movie(movie, save_path)

# generate frames from keyframes, one segment per consecutive pair
def interpolate(keyframes, frames_per_keyframe):
    return tf.unnest(tf.vectorized_map(
        lambda i: interpolate_pair(keyframes, i, frames_per_keyframe),
        tf.range(1, len(keyframes))))

def interpolate_pair(keyframes, n, frames_per_keyframe):
    key_1, key_2 = keyframes[n - 1], keyframes[n]
    # per-frame step that moves key_1 toward key_2
    d_v_d_i = (key_2 - key_1) / frames_per_keyframe
    return tf.vectorized_map(
        lambda d_i: translate_objects(key_1, d_v_d_i * d_i),
        tf.range(frames_per_keyframe))

def translate_objects(xyz, v):
    return xyz + v  # elementwise shift of every object

# draw the frame and project it onto a camera plane
def draw_frame(xyz):
    atoms = tf.vectorized_map(draw_atom, xyz)
    camera = gfx.zoom_to_fit()
    return gfx.project(atoms, camera)

def draw_atom(xyz):
    return gfx.geometry.sphere(xyz, radiance=[lambertian, phong])

We might include the camera positioning in the interpolation part and smoothly transition camera positions across scenes.

Also, the transition lengths between keyframes could easily differ, just by passing a tensor of transition lengths instead of a scalar; and object motion could be nonlinear, by passing a function per object which returns the new pose given the number of frames elapsed.

Finally, a "scene" might not be a pair of keyframes but instead a list of many, in which case the terminology would change. We would also presumably want to create and delete objects ("ragged keyframes"), which is not shown in this pseudocode.
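As a runnable stand-in for the hypothetical tf.unnest-based interpolate above, a broadcasted lerp plus a reshape does the same job; the names and shapes here are our own assumptions, still linear and still ignoring object creation and deletion:

```python
import tensorflow as tf

def interpolate(keyframes, frames_per_keyframe):
    """Expand (k, n_atoms, 3) keyframes into ((k-1)*f, n_atoms, 3) frames.

    Each consecutive keyframe pair contributes f linearly interpolated
    frames; every pair is computed independently, so the per-segment
    work parallelizes trivially.
    """
    a = keyframes[:-1]                              # (k-1, n, 3) segment starts
    b = keyframes[1:]                               # (k-1, n, 3) segment ends
    t = tf.linspace(0.0, 1.0, frames_per_keyframe)  # (f,) weights
    t = tf.reshape(t, [1, -1, 1, 1])                # broadcast to (1, f, 1, 1)
    seg = (1.0 - t) * a[:, None] + t * b[:, None]   # (k-1, f, n, 3)
    s = tf.shape(seg)
    return tf.reshape(seg, [s[0] * s[1], s[2], s[3]])

keys = tf.random.normal([4, 10, 3])                 # 4 keyframes, 10 atoms
frames = interpolate(keys, 30)                      # (90, 10, 3)
```

With @tf.function around the caller, all segments are computed in one fused graph; the rendering of each resulting frame is then independent too.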

julienvalentin commented 5 years ago

'If we could interpolate between pairs of keyframes, then we might draw multiple sections of the movie simultaneously and thus generate the movie much faster; this is parallelizable because each frame depends only on the prior and next keyframe.

How hard do you think it would be to implement parallel 3D rendering like this?'

I am not sure how easy it would be to precisely interpolate a function that has so many discontinuities. Furthermore, discontinuities that should occur between keyframes are hard to predict from the keyframes alone. If your problem is constrained enough, I would suggest training a network to perform the interpolation.

Best.

julienvalentin commented 5 years ago

Closing since inactive for a while; feel free to re-open if needed.

dynamicwebpaige commented 4 years ago

Pinging this thread - were you able to use TF-GFX to visualize, @kamelbelkadhi and @bionicles?