We ran into a weird issue that seems to only affect the CUDA mode.
The following code:
import drjit as dr
import numpy as np
a = dr.cuda.ad.TensorXf(np.random.rand(16,16))
dr.enable_grad(a)
print(dr.grad_enabled(a))
with dr.suspend_grad():
    a.numpy()
print(dr.grad_enabled(a))
prints
True
False
I.e., the gradients of a somehow get disabled. This only happens in CUDA; when using dr.llvm.ad.TensorXf, everything works as expected. Maybe some form of undefined behavior? Or a bug caused by the migration of the tensor to the CPU when copying to NumPy?