Open · brabbitdousha opened this issue 1 month ago
Hello @brabbitdousha,
The goal of mi.Loop() is to create a loop inside of the kernel being recorded. By definition, the body of that loop must be part of that same kernel. Calling functions from another framework is not supported: they cannot be traced by Dr.Jit and therefore cannot be included in the body of the loop.
As you have noticed, disabling loop recording (dr.JitFlag.LoopRecord) makes it possible to call into the other framework inside the loop body, at the cost of breaking up the megakernel and incurring a lot of overhead (e.g. to read/write the results of each kernel from/to global memory).
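For illustration, here is a minimal sketch (not from the original thread; names and bounds are arbitrary) of a loop whose body consists purely of Dr.Jit operations and can therefore be recorded into a single kernel:

```
# A recordable loop: the body contains only Dr.Jit operations, so the
# entire loop is traced into the megakernel being recorded.
import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')

i = dr.zeros(mi.UInt32, 8)
acc = dr.zeros(mi.Float, 8)
loop = mi.Loop("sum", lambda: (i, acc))
while loop(i < 10):
    acc += mi.Float(i)   # pure Dr.Jit arithmetic: traceable
    i += 1

# A PyTorch call inside the body would need the concrete contents of
# `acc`, which do not exist while the kernel is still being recorded --
# this is why it cannot be part of a recorded loop.
```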
Summary
Hi, I am using PyTorch with Mitsuba 3, and I need to run PyTorch network inference during rendering. Here is a pseudo-code of my workflow: I call trainer.eval() (which wraps a PyTorch network) inside a mi.Loop. For example, to do neural importance sampling, I need to insert a network into the rendering loop, and after rendering is over, I update the network using trainer.train().
However, after updating the network with trainer.train(), the output of trainer.eval() inside mi.Loop doesn't change. If I disable recording with

```
# dr.set_flag(dr.JitFlag.VCallRecord, False)
dr.set_flag(dr.JitFlag.LoopRecord, False)
```

everything works correctly, but it is much slower. So with these two flags enabled, is using a PyTorch network inside mi.Loop not allowed? I only run network inference inside mi.Loop; I am not using differentiable rendering.
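For reference, a minimal sketch of the slower but working configuration described above. The linear layer is a dummy stand-in for the trainer's network, and the conversions shown (.torch(), a numpy round-trip) are one possible way to exchange data, not necessarily what the original code used:

```
# With loop recording disabled, mi.Loop falls back to wavefront mode:
# each iteration is launched eagerly, so the loop state has concrete
# contents and can be handed to PyTorch between iterations.
import torch
import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')

dr.set_flag(dr.JitFlag.LoopRecord, False)   # the flag from the question

net = torch.nn.Linear(1, 1).cuda()          # dummy stand-in for the real network

i = dr.zeros(mi.UInt32, 8)
x = dr.arange(mi.Float, 8)
loop = mi.Loop("wavefront", lambda: (i, x))
while loop(i < 4):
    with torch.no_grad():
        # Valid here because `x` is evaluated in wavefront mode; this
        # per-iteration device round-trip is the overhead mentioned above.
        y = net(x.torch().unsqueeze(-1)).squeeze(-1)
    x += mi.Float(y.cpu().numpy())
    i += 1
```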
System configuration
System information:
- OS: Windows
- CPU: Intel i9-13900H
- GPU: RTX 4060 Laptop
- Python version: 3.9
- CUDA version: 12.0
- NVidia driver: 550.54.14
- Dr.Jit version: 0.4.4
- Mitsuba version: 3.5.0