Weird, I'm getting compiler failures with cuda_ad_rgb at random iterations.
Critical Dr.Jit compiler failure: jit_optix_compile(): optixModuleGetCompilationState() indicates that the compilation did not complete succesfully. The module's compilation state is: 0x2363
Setting a different seed every iteration certainly seems to help convergence, but the error still creeps up eventually:
image = mi.render(scene, params, spp=1, seed=it)
I suspected "unreachable pixels", but enabling mask_updates in the optimizer did little to help.
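For reference, this is the optimizer setting I mean by mask_updates (a minimal sketch; the learning rate and the 'my_envmap.data' key are placeholders from my reimplementation of the tutorial):

# Only update parameters that received a nonzero gradient in the current iteration
opt = mi.ad.Adam(lr=0.02, mask_updates=True)
opt['my_envmap.data'] = params['my_envmap.data']
params.update(opt)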
This issue had stumped me for weeks; everything seemed correct. I had just missed that your scene uses the path integrator, which is the issue here.
As you can see in the plugin reference, the data parameter can introduce discontinuities: https://mitsuba.readthedocs.io/en/stable/src/generated/plugins_emitters.html#environment-emitter-envmap. This is because the emitter importance-samples the environment map based on data, which was not the case in Mitsuba 2.
Discontinuities need to be handled correctly. In Mitsuba 3 we usually have a *_reparam variant of the AD integrators which is capable of dealing with discontinuities. You can learn a bit more about AD integrators and discontinuities here and here.
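For example, here is a minimal sketch of switching the rendering call to a reparameterized integrator (prb_reparam is one such variant; scene, params, spp and it are assumed to be defined as in the tutorial):

import mitsuba as mi
mi.set_variant('cuda_ad_rgb')

# Reparameterized AD integrator that handles the discontinuities
integrator = mi.load_dict({'type': 'prb_reparam'})

# Passing it to mi.render() overrides the scene's 'path' integrator
img = mi.render(scene, params, integrator=integrator, seed=it, spp=spp)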
Also, you should change the rendering seed at every iteration, by doing something like img = mi.render(scene, params, seed=it, spp=spp).
With these two changes the image loss no longer seems to diverge. Please keep this thread updated if this still doesn't fix the issue on your end.
Summary
When optimizing an envmap, the objective starts consistently increasing after a while.
System configuration
System information:
OS: CentOS Linux release 7.9.2009 (Core)
CPU: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
GPU: Tesla V100-SXM2-16GB
Python: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]
NVidia driver: 515.65.01
CUDA: 9.0.176
LLVM: 0.0.0
Dr.Jit: 0.2.2
Mitsuba: 3.0.2
Is custom build? False
Compiled with: GNU 10.2.1
Variants: scalar_rgb scalar_spectral cuda_ad_rgb llvm_ad_rgb
Description
Implementing a Mitsuba 3 version of the Mitsuba 2 envmap optimization tutorial yields results very similar to those from the tutorial for the first 100 iterations (see Video 1 below), but the optimization starts diverging soon after that (see Video 2). The objective starts increasing consistently after ~200 iterations. This also happens with simpler geometry (e.g. a sphere) and with other optimizers and learning rates; using far more samples delays the divergence but does not prevent it. Any idea why this is happening, and how to prevent it?
https://user-images.githubusercontent.com/15837806/205184065-670a92a9-dfba-49fb-b4e2-81e745886bdc.mp4
Video 1: a comparison of my reimplementation in Mitsuba 3 (left) with the video from the Mitsuba 2 tutorial (right)
https://user-images.githubusercontent.com/15837806/205184087-334374fe-5396-43c8-8e7f-f72d54a9d67a.mp4
Video 2: the results of optimizing for 700 iterations
Steps to reproduce
Mitsuba 3 reimplementation:
Step 1: download and unzip bunny.zip provided in the Mitsuba 2 tutorial.
Step 2: run the Mitsuba 3 version of the code from the tutorial:
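A minimal sketch of such a reimplementation (the scene path, the 'my_envmap.data' parameter key, the learning rate and the sample counts below are assumptions, not the exact script):

import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')

scene = mi.load_file('bunny/bunny.xml')   # scene shipped in bunny.zip (assumed path)
params = mi.traverse(scene)
key = 'my_envmap.data'                    # envmap bitmap parameter (assumed id)

# Render a reference image with the original envmap, then reset it to a constant
img_ref = mi.render(scene, spp=512)
params[key] = dr.full(mi.TensorXf, 1.0, params[key].shape)
params.update()

opt = mi.ad.Adam(lr=0.02)
opt[key] = params[key]
params.update(opt)

for it in range(700):
    img = mi.render(scene, params, spp=1, seed=it)
    loss = dr.mean(dr.sqr(img - img_ref))  # L2 image loss
    dr.backward(loss)
    opt.step()
    params.update(opt)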