mitsuba-renderer / drjit

Dr.Jit — A Just-In-Time-Compiler for Differentiable Rendering
BSD 3-Clause "New" or "Revised" License

`dr.detach`: Make sure we return copy of gradient disabled variables #239

Closed rtabbara closed 1 month ago

rtabbara commented 1 month ago

Picked up when running through the tutorials. Suppose we have

opt = mi.ad.SGD(lr=0.25, params=params)
...
y = mi.Float(2.0)
opt['y'] = y  # A copy should be made here
opt['y'] += 1.0
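Since Dr.Jit arrays are mutated in place by `+=`, storing a reference instead of a copy means the caller's variable changes too. A minimal pure-Python analogue (the `Float` and `Optimizer` classes below are hypothetical stand-ins, not the real Dr.Jit/Mitsuba types) shows the aliasing hazard:

```python
class Float:
    """Hypothetical stand-in for a mutable array type like mi.Float."""
    def __init__(self, value):
        self.value = value
    def __iadd__(self, other):
        # In-place add: mutates this object rather than creating a new one
        self.value += other
        return self

class Optimizer:
    """Hypothetical optimizer that stores values without copying."""
    def __init__(self):
        self.variables = {}
    def __setitem__(self, key, value):
        # Buggy behaviour: stores a reference, so the caller's
        # object and the optimizer's entry are the same object
        self.variables[key] = value
    def __getitem__(self, key):
        return self.variables[key]

y = Float(2.0)
opt = Optimizer()
opt['y'] = y       # no copy is made
opt['y'] += 1.0    # mutates the shared object in place
print(y.value)     # 3.0 -- the caller's variable was silently modified
```

This is exactly the surprise the issue describes: after `opt['y'] += 1.0`, the original `y` is no longer `2.0`.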

Internally, the Optimizer's __setitem__ would perform

...
self.variables[key] = dr.detach(value, True)

So in the case that value is not grad-enabled, we still want a copy of the variable to be stored rather than a reference to the original value.
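A sketch of the intended fix, under the same hypothetical stand-in types as above (`copy.copy` stands in for whatever copy mechanism the real `dr.detach` uses on JIT arrays): detach should return a copy even when the input carries no gradients, so the stored variable never aliases the caller's object.

```python
import copy

class Float:
    """Hypothetical stand-in for mi.Float (not the real Dr.Jit type)."""
    def __init__(self, value):
        self.value = value
    def __iadd__(self, other):
        self.value += other
        return self
    def __copy__(self):
        return Float(self.value)

def detach(value):
    # Fixed behaviour: always return a copy, whether or not the
    # value is grad-enabled, so optimizer state never aliases input.
    return copy.copy(value)

variables = {}
y = Float(2.0)
variables['y'] = detach(y)   # stores a copy, not a reference
variables['y'] += 1.0        # mutates only the stored copy
print(y.value)               # 2.0 -- original left untouched
print(variables['y'].value)  # 3.0
```

With the copy in place, updating the optimizer's entry leaves the user's original variable unchanged, matching the behaviour the comment `# A copy should be made here` expects.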