Closed: Gzhji closed this issue 1 year ago
Hi!
The 'image' you get when you render with a custom AOV integrator is a multi-channel image that contains the channels of the custom variables you are integrating, stacked alongside the main image channels. It is a tensor of this shape (note that height comes first):
TensorXf(shape=(img_height, img_width, n_channels))
In the example integrator you have two AOVs:
dd.y:depth
nn:sh_normal
In your case, the final render and the path-integrated image will be equal because both are rendered with path tracing. If you want to get the normals from the output, you can simply slice the tensor, e.g.:
image_normal = render_output[:, :, 5:8]
Hope it helps.
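As a sketch of the slicing above, using NumPy in place of the Mitsuba tensor (the 12-channel layout of RGBA, then depth, then normals, then the nested path image is an assumption based on the example integrator; check your film's channel names for the actual order):

```python
import numpy as np

# Stand-in for the rendered output: height x width x channels.
# Assumed layout: 0-3 RGBA, 4 depth, 5-7 sh_normal, 8-11 nested path RGBA.
H, W = 256, 256
render_output = np.zeros((H, W, 12), dtype=np.float32)
render_output[:, :, 5:8] = 1.0  # pretend every shading normal is (1, 1, 1)

# Slice out the three normal channels, exactly as in the answer above.
image_normal = render_output[:, :, 5:8]
print(image_normal.shape)  # (256, 256, 3)
```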
Hi @Gzhji
@Frollo24 is correct: this tensor simply stacks all layers along the third dimension.
If you're looking to export this as an .exr file, you can take a look at the snippet in this tutorial/guide: https://github.com/mitsuba-renderer/mitsuba3/issues/849
Basically, you can do something like this:
_ = mi.render(scene)
scene.sensors()[0].film().bitmap().write('output.exr')
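If you would rather keep the layers as separate arrays instead of one stacked tensor, a minimal sketch of splitting them by name (the layer offsets below are hypothetical and depend on your film and integrator configuration, so verify them against your own channel list):

```python
import numpy as np

# Hypothetical channel layout for a 12-channel AOV render:
# layer name -> (start, stop) slice into the third dimension.
layout = {
    'image':     (0, 4),   # main RGBA
    'depth':     (4, 5),   # dd.y:depth
    'sh_normal': (5, 8),   # nn:sh_normal
    'path':      (8, 12),  # nested path-integrator RGBA
}

# Stand-in for the stacked render output.
stacked = np.zeros((256, 256, 12), dtype=np.float32)

# Split the stacked tensor into one array per named layer.
layers = {name: stacked[:, :, a:b] for name, (a, b) in layout.items()}
print({name: arr.shape for name, arr in layers.items()})
```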
Dear community:
I am trying to get surface normal map through AOV integrator and multi-channel image.
I tested the example cbox file and got .png and .exr file successfully.
However, when I print the rendered .exr file:
bmp_exr = mi.Bitmap('my_first_render.exr')
print(bmp_exr)
it shows:
image: TensorXf(shape=(256, 256, 12))
RuntimeError: "my_first_render.exr": read 0 out of 4 bytes
Does anyone know what causes this issue?
Thanks in advance!