Closed tims457 closed 1 year ago
Could you try wrapping your NeuralODEModule instantiation with torch.no_grad(), like this?

```python
with torch.no_grad():
    node = NeuralODEModule()
copy.deepcopy(node)
```

This will discard gradient information, but since you want to deepcopy the module, I assume the goal is to create a new instance with the same structure and parameters, similar to using clone() and detach() with Tensors [reference].
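To make the suggestion concrete, here is a minimal, self-contained sketch of the pattern. `TinyODEModule` is a hypothetical stand-in for the issue author's NeuralODEModule (not part of any library); the point is that a tensor derived from a parameter during `__init__` only stays deepcopy-able if it is created under `torch.no_grad()`:

```python
import copy

import torch
import torch.nn as nn


class TinyODEModule(nn.Module):
    """Hypothetical stand-in for NeuralODEModule."""

    def __init__(self):
        super().__init__()
        self.f = nn.Linear(2, 2)
        # A tensor derived from a parameter. Outside no_grad() this would
        # be a non-leaf tensor with autograd history, which deepcopy rejects.
        self.scale = self.f.weight * 2.0


# Instantiate under no_grad() so no autograd history is recorded.
with torch.no_grad():
    node = TinyODEModule()

node_copy = copy.deepcopy(node)  # succeeds: same structure and parameters
```

Constructing the same module outside the `with torch.no_grad():` block makes `self.scale` part of the autograd graph, and `copy.deepcopy(node)` then raises a RuntimeError.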
This appears to be working, thanks. Can you explain why this is necessary for NeuralODE but not for other Torch-only models such as nn.Sequential(nn.Linear(...))?
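A likely explanation (an assumption based on general PyTorch behavior, not verified against NeuralODE internals): deepcopy only works on tensors that are graph leaves. A plain nn.Sequential holds only leaf parameters, whereas a NeuralODE instance can end up holding non-leaf tensors with autograd history. The underlying tensor behavior can be reproduced directly:

```python
import copy

import torch

# A tensor created explicitly by the user is a graph leaf: deepcopy works.
leaf = torch.randn(3, requires_grad=True)
leaf_copy = copy.deepcopy(leaf)

# A tensor produced by an operation has a grad_fn (non-leaf):
# deepcopy raises a RuntimeError.
non_leaf = leaf * 2
try:
    copy.deepcopy(non_leaf)
except RuntimeError as err:
    print(f"deepcopy failed: {err}")
```

Wrapping instantiation in `torch.no_grad()` prevents any such autograd history from being recorded in the first place, so every tensor the module holds remains copyable.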
Describe the bug
I'm trying to implement a neural ODE model with the Ray library, but Ray calls deepcopy on its models, which causes an error with custom modules that include NeuralODE.
Steps to Reproduce
Minimal example triggering the error.
Error message
Expected behavior
A copy of the class instance is returned.
Additional context
Section in ray.rllib.policy.torch_policy_v2.py causing the error.