daniilidis-group / neural_renderer

A PyTorch port of the Neural 3D Mesh Renderer

Need Transparent Texture Mapping #10

Closed czw0078 closed 5 years ago

czw0078 commented 5 years ago

In 3D modeling, we sometimes use a PNG image as the texture to get transparency effects. A PNG image has an alpha channel, and this is the easiest way to handle glass materials and animal hair. For example, here is a horse model rendered this way in OpenGL.

Look at the horse's tail.

(I will also post the PyTorch rendering result here after #9 is merged. The PyTorch version cannot handle PNG textures for now.)

PNG textures are a cheap but widely used trick for adding realism to a render, so support would be very useful. However, this may be a very challenging enhancement.

It could be implemented as a "mask" value, but it is a complicated issue that touches on things like the Z-buffer.

czw0078 commented 5 years ago

@nkolot I am trying to implement this and make a pull request. Could you give me some ideas/pointers on how to do it?

nkolot commented 5 years ago

I actually looked into it, and it is quite hard to implement. The basic problem is that when you render transparent triangles, the order in which they appear matters (the z-buffer you mentioned). I can't see a straightforward way to implement this with the current data structures. It wouldn't be hard to do on the CPU, but on the GPU it could be a nightmare. For each pixel you would have to store all triangles that might contribute to the rendered image, but the data structure used for that, face_index_map, can only hold one value per pixel. Ideally you would want a list for each pixel, but non-regular data structures are difficult to implement in CUDA.
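One common way around the "list per pixel" problem in GPU renderers is a fixed-size k-buffer: keep only the K nearest candidate faces per pixel, which turns the irregular list into K extra regular channels. This is not part of neural_renderer; `depth_buf`, `face_buf`, and `insert_face` below are hypothetical names in a minimal CPU-side sketch of the idea.

```python
import torch

# Hypothetical k-buffer sketch: instead of face_index_map holding one face
# per pixel, keep a fixed number K of the nearest faces per pixel.
K, H, W = 4, 2, 2
depth_buf = torch.full((K, H, W), float("inf"))          # depths, sorted near-to-far
face_buf = torch.full((K, H, W), -1, dtype=torch.long)   # face indices, -1 = empty

def insert_face(px, py, face_id, depth):
    """Insert a candidate face into the sorted per-pixel slots (CPU sketch)."""
    for k in range(K):
        if depth < depth_buf[k, py, px]:
            # shift farther entries down one slot, dropping the farthest
            depth_buf[k + 1:, py, px] = depth_buf[k:-1, py, px].clone()
            face_buf[k + 1:, py, px] = face_buf[k:-1, py, px].clone()
            depth_buf[k, py, px] = depth
            face_buf[k, py, px] = face_id
            return  # faces beyond the K nearest are simply dropped

insert_face(0, 0, face_id=7, depth=2.0)
insert_face(0, 0, face_id=3, depth=1.0)
# slot 0 now holds the nearer face 3, slot 1 holds face 7
```

Because K is fixed at compile time, each pixel's slots live at regular offsets, which is exactly the kind of layout a CUDA kernel can handle; the trade-off is that faces beyond the K nearest are lost.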

The other issue I was thinking about is how you would define gradients for transparent textures. Take, for example, the case where a triangle is entirely transparent: the renderer would simply ignore it and optimize whatever is "behind" it in the scene.

What are your thoughts on that?

czw0078 commented 5 years ago

I feel the same way: the current data structure is hard to change, and I don't think it is a good idea to touch the low-level CUDA kernel files either.

You're right about the gradient problem. A workaround is to adopt the rule that there is no perfectly transparent glass in the world. In other words, we can require the user to set the alpha value to something greater than 0, no matter how small, so that a share of the gradient flows into the transparent layer in proportion to alpha. If a piece of glass really is perfectly transparent, we may as well remove it and make no change at all; we probably did not need it in the first place.
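The rule above can be sketched in a few lines of PyTorch. This is not renderer code, just a toy two-layer composite; `eps` is a hypothetical alpha floor chosen for the sketch. Clamping alpha away from zero guarantees the transparent layer always receives a nonzero fraction of the gradient.

```python
import torch

eps = 1e-3  # hypothetical minimum alpha ("no perfectly transparent glass")
fg = torch.rand(3, 4, 4, requires_grad=True)   # foreground (glass) layer
bg = torch.rand(3, 4, 4, requires_grad=True)   # everything behind it
alpha = torch.clamp(torch.tensor(0.1), min=eps)

# alpha blending: gradients split in proportion to alpha,
# since d(out)/d(fg) = alpha and d(out)/d(bg) = 1 - alpha
out = alpha * fg + (1 - alpha) * bg
out.sum().backward()
```

With alpha = 0.1, every element of `fg.grad` is 0.1 and every element of `bg.grad` is 0.9, so even a nearly transparent layer keeps getting optimized.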

I have a vague idea of implementing it at a high level (for example, in RasterizeFunction). The renderer would generate a batch of images as a tensor [z, 3, W, H] rather than a single [1, 3, W, H]. The different image layers (like a Z-buffer) would then be stitched together into one image, with the alpha value serving as a mask: output = alpha * current_layer_foreground + (1 - alpha) * all_previous_layers_background. There are still a lot of details; let me think for a while to make the idea concrete. It is a tricky situation.
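The stitching step described above is essentially the "over" compositing operator applied back-to-front across the z layers. A minimal sketch, assuming the layers carry RGB plus alpha (so [z, 4, H, W] rather than the [z, 3, W, H] mentioned above) and that index 0 is the nearest layer:

```python
import torch

# Hypothetical depth-sorted layer stack from the rasterizer: RGB + alpha per layer
Z, H, W = 3, 8, 8
layers = torch.rand(Z, 4, H, W)  # index 0 = nearest layer

out = torch.zeros(3, H, W)  # start from a black background
for i in reversed(range(Z)):  # composite back-to-front
    rgb, a = layers[i, :3], layers[i, 3:4]
    out = a * rgb + (1 - a) * out  # the "over" operator from the comment above
```

Since every step is a differentiable convex combination, gradients flow into each layer in proportion to how much it contributes to the final pixel, which also realizes the alpha-proportional gradient rule discussed earlier.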

Maybe we should do #6 first.

nkolot commented 5 years ago

I am closing this issue for now because it seems very difficult to implement.

czw0078 commented 5 years ago

Agree