Open · opened by @expenses 2 years ago
Hi @expenses
You were almost there; just a few steps were missing:
1) You need to further initialize the film by calling film.prepare([]). With this, it knows how many channels it is supposed to hold.
2) You couldn't use film.create_block() because of the previous point. Once the film is prepared, you should use this method rather than building an image block yourself.
3) The sampler also needs a few more steps to be fully initialized. First we need to specify its "depth" with sensor.sampler().set_samples_per_wavefront(1), and then we also need to properly seed it with sampler.seed(0xDEADCAFE, image_res[0] * image_res[1]).
4) The image block has a weight per sample, so you should be doing something like this: image_block.put(mi.Vector2f(x, y), [spec.x, spec.y, spec.z, 1]).
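To illustrate point 4, here is a plain-Python analogue of the weight-per-sample handling (illustrative only, not the actual Mitsuba implementation): each put accumulates RGB plus a weight per pixel, and developing divides by the accumulated weight.

```python
# Plain-Python sketch of ImageBlock's per-sample weight handling.
# All names here are illustrative, not the Mitsuba API.

def make_block(width, height):
    # One [R, G, B, W] accumulator per pixel.
    return [[[0.0, 0.0, 0.0, 0.0] for _ in range(width)] for _ in range(height)]

def put(block, x, y, rgb, weight=1.0):
    # Accumulate the sample and its weight, like image_block.put(...).
    px = block[y][x]
    for c in range(3):
        px[c] += rgb[c]
    px[3] += weight

def develop(block):
    # Divide accumulated RGB by accumulated weight, like film.develop().
    return [[[c / px[3] if px[3] > 0 else 0.0 for c in px[:3]]
             for px in row] for row in block]

block = make_block(2, 2)
put(block, 0, 0, [1.0, 0.5, 0.25])
put(block, 0, 0, [0.0, 0.5, 0.75])  # second sample on the same pixel
image = develop(block)
print(image[0][0])  # average of the two samples: [0.5, 0.5, 0.5]
```

Without the trailing weight of 1, the division at develop time has nothing to normalize by, which is one way to end up with an all-black or invalid image.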
With these changes, I've managed to run your script without any crashes. However, the output is all black; I think something in your camera setup or ray directions might be flawed. I'll let you look into that.
Doing something as simple as this is definitely more awkward than we'd like. We'll figure out a simpler solution, or at least improve the documentation to make this easier to reproduce.
@njroussel wonderful, thanks!
Yep, that works! I just needed to also change how the output is written:
diff --git a/old.py b/new.py
index 27fafb4..3c36979 100644
--- a/old.py
+++ b/new.py
@@ -24,6 +24,11 @@ x, y = dr.meshgrid(
dr.linspace(mi.Float, -cam_height / 2, cam_height / 2, image_res[1])
)
+dest_x, dest_y = dr.meshgrid(
+ dr.arange(dr.llvm.Float, image_res[0]),
+ dr.arange(dr.llvm.Float, image_res[1])
+)
+
# Ray origin in local coordinates
ray_origin_local = mi.Vector3f(x, y, 0)
@@ -34,10 +39,13 @@ ray = mi.Ray3f(o=ray_origin, d=cam_dir)
sensor = scene.sensors()[0]
film = sensor.film()
+film.prepare([])
+sampler = sensor.sampler()
+sampler.seed(0xDEADCAFE, image_res[0] * image_res[1])
-(spec, mask, aov) = scene.integrator().sample(scene, sensor.sampler(), ray, medium=None, active=True)
-image_block = mi.ImageBlock([256, 256], [0, 0], 3)
-image_block.put(mi.Vector2f(x, y), spec)
+(spec, mask, aov) = scene.integrator().sample(scene, sampler, ray, medium=None, active=True)
+image_block = film.create_block()
+image_block.put((dest_x, dest_y), [spec.x, spec.y, spec.z, 1.0])
film.put_block(image_block)
image = film.develop()
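The dest_x/dest_y change matters because image_block.put expects integer pixel coordinates, not the camera-space x, y used to generate the rays. A plain-Python sketch of what dr.meshgrid(dr.arange(...), dr.arange(...)) produces (illustrative; not the Dr.Jit API, though the layout should match its default 'xy' indexing):

```python
# Plain-Python sketch of dr.meshgrid(dr.arange(w), dr.arange(h)):
# flat arrays of per-pixel destination coordinates, x varying fastest.

def meshgrid(xs, ys):
    grid_x = [x for _ in ys for x in xs]  # repeat the x range per row
    grid_y = [y for y in ys for _ in xs]  # repeat each y per column
    return grid_x, grid_y

image_res = (3, 2)
dest_x, dest_y = meshgrid(range(image_res[0]), range(image_res[1]))
print(dest_x)  # [0, 1, 2, 0, 1, 2]
print(dest_y)  # [0, 0, 0, 1, 1, 1]
```

Each (dest_x[i], dest_y[i]) pair is the pixel that sample i is splatted into, which is why there is one entry per ray in the wavefront.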
Output:
I'm going to keep this open for a bit because the SEGFAULT still seems bad. I might see if I can debug it later.
Summary
I tried to follow https://mitsuba.readthedocs.io/en/stable/src/rendering/scripting_renderer.html but modified it so that the scene integrator is used instead:
This resulted in a segmentation fault. Backtrace from gdb:

Note that I'm not using image_block = film.create_block() here. This is because the image block that is created has a channel_count of 0 for some reason. I'm probably doing something very wrong in my own code here, but there are unfortunately not any tutorials for using Integrator.sample. See https://github.com/mitsuba-renderer/mitsuba3/discussions/395.

System configuration
System information:

OS: Arch Linux 6.0.9-arch1-1
CPU: 11th Gen Intel i7-1165G7
GPU: Intel TigerLake-LP GT2 [Iris Xe Graphics]
Python version: 3.10.8
LLVM version: 14.0.6
CUDA version: N/A
NVidia driver: N/A
Dr.Jit version: 0.2.2 (?)
Mitsuba version: 3.0.2 (master[f2d2f1f], Linux, 64bit, 8 threads, 8-wide SIMD)
Enabled processor features: cuda llvm avx f16c sse4.2 x86_64