Closed tetterl closed 4 years ago
It gives the average. This is necessary for the result to be differentiable. You can use 1 sample per pixel if you don't want this behavior.
Thanks. So basically, if I want to get the 'correct' uv maps (or depth), I have to mask out all the (0, 0) values outside of the shape and divide the others by alpha, as I do at the bottom of the example. If so, isn't this a differentiable operation?
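The masking and alpha-division described above could be sketched like this (a minimal example with hypothetical tensors; the actual channel layout comes from the renderer output and is assumed here):

```python
import torch

# Hypothetical outputs: `uv` is the averaged uv channel (H, W, 2) and
# `alpha` is the coverage channel (H, W, 1). The pixel at [0, 1] is half
# covered: true uv (0.6, 0.8) averaged with the (0, 0) background.
uv = torch.tensor([[[0.00, 0.00], [0.30, 0.40]],
                   [[0.25, 0.25], [0.50, 0.50]]])
alpha = torch.tensor([[[0.0], [0.5]],
                      [[1.0], [1.0]]])

eps = 1e-6
mask = alpha > eps  # pixels touched by the shape
# Divide out the coverage so partially covered pixels recover the
# foreground uv value instead of its average with the background.
uv_corrected = torch.where(mask,
                           uv / alpha.clamp(min=eps),
                           torch.zeros_like(uv))
# uv_corrected[0, 1] is now (0.6, 0.8); uv_corrected[0, 0] stays (0, 0).
```

Every operation here (division, `where`) is differentiable in the interior, which is what motivates the question; the discontinuity issue raised in the reply is about object boundaries, not these ops.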
It depends on which variable you differentiate with respect to. What you do still introduces discontinuities at object boundaries with respect to object movement. Imagine your object moves from one pixel to another: the pixel it moves to will suddenly be populated with the object's uv values, creating a discontinuity.
Ah I see, thanks for explaining! At the moment I'm only trying to learn the texture (static object/camera). So it's helpful to generate the uv coordinates in a pre-processing step and then reuse them during training, avoiding the rendering cost by only sampling the texture at the uv coordinates.
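For the static-scene case described above, a sketch of this idea using `torch.nn.functional.grid_sample` (the shapes and the [0, 1] uv convention are assumptions for illustration, not pyredner's API):

```python
import torch
import torch.nn.functional as F

# Hypothetical setup: a learnable texture (1, C, Ht, Wt) and a fixed uv
# map (1, H, W, 2) in [0, 1], precomputed once by the renderer.
texture = torch.rand(1, 3, 64, 64, requires_grad=True)
uv = torch.rand(1, 32, 32, 2)

# grid_sample expects coordinates in [-1, 1].
grid = uv * 2.0 - 1.0
rendered = F.grid_sample(texture, grid, align_corners=True)  # (1, 3, 32, 32)

# Only the texture requires a gradient; the fixed uv map means no
# renderer call is needed inside the training loop.
loss = rendered.mean()
loss.backward()
```

Since the uv map is constant, gradients flow only into the texture, which matches the static object/camera assumption.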
No problem. Feel free to propose any fix to this. My only suggestion is to keep the original behavior as the default one, and the other behaviors can be turned on through options.
It might be possible to provide an option that allows for such outputs (if the relevant inputs don't require a gradient). But since we can't detect that with TensorFlow, this would be a rather inconsistent interface. I'll close this issue for the moment.
I'm currently playing around with "rendering" the uv channel in addition to the other channels. I observe a strange behavior, as can be seen here: https://colab.research.google.com/drive/1znzLKboP8xAf2vzt2vKkIiAOtIAPyx6O (or below).
As far as I understand, all ray samples that don't hit an object are assigned depth 0 and uv coordinates (0, 0). In the subsequent averaging of samples, these zero values can give quite unintuitive results: e.g. a pixel centered on an edge gets depth 0.5 even though the depth at the edge is 1.0. Is this a bug or desired behavior? My intuition would be that, for some of the `pyredner.channels`, only samples that intersect a shape should be used for averaging. Furthermore, the rendering itself (e.g. `diffuse_reflectance`) appears correct in this respect, but it doesn't correspond with the `uv_coords` returned by the renderer (i.e. if we used these uv coordinates to sample the texture/mipmap, we would get a different result).
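The edge case described above can be illustrated with a toy calculation (plain Python, values chosen for illustration only):

```python
# Suppose a pixel straddles an object edge: half its samples hit the
# object at depth 1.0, the other half miss and are assigned depth 0.
samples = [1.0, 1.0, 0.0, 0.0]

# Averaging over all samples mixes in the background zeros:
averaged = sum(samples) / len(samples)   # 0.5, not the edge depth 1.0

# Averaging only the hitting samples gives the intuitive value:
hits = [s for s in samples if s > 0.0]
edge_depth = sum(hits) / len(hits)       # 1.0
```

The second form is the "only samples that intersect a shape" behavior suggested above; the first is what produces the 0.5 depth at an edge of depth 1.0.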