DiffEqML / torchdyn

A PyTorch library entirely dedicated to neural differential equations, implicit models and related numerical methods
https://torchdyn.org
Apache License 2.0

HybridNeuralDE can't output CUDA tensors #73

Closed qpwodlsqp closed 3 years ago

qpwodlsqp commented 3 years ago

Describe the bug

# Excerpt from HybridNeuralDE.forward (hybrid.py):
Y = torch.zeros(x.shape[0], *h.shape)   # allocated on the CPU regardless of x's device
if self.reverse: x = x.flip(0)
for t, x_t in enumerate(x):
    h, c = self.jump_func(x_t, h, c)
    h = self.flow(h)
    Y[t] = h                             # cross-device copy of the CUDA result into the CPU buffer
Y = self.out(Y)                          # fails here: self.out lives on the GPU, Y does not

In lines 23-29 of hybrid.py, Y remains a CPU tensor after initialization even when the HybridNeuralDE parameters and the input tensor are placed on the GPU. This causes an error at line 29 when the model is run on the GPU. I am currently using torch==1.7.1 and torchdyn==0.2.2.1.
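For context, here is a minimal sketch of the same failure mode outside torchdyn (the module and tensor names are illustrative, not taken from hybrid.py): a buffer created with torch.zeros defaults to the CPU, accepts copies of CUDA tensors without complaint, and then triggers a device-mismatch RuntimeError as soon as it is passed to a module whose parameters live on the GPU.

import torch
import torch.nn as nn

if torch.cuda.is_available():
    device = torch.device("cuda")
    out = nn.Linear(4, 2).to(device)    # stand-in for self.out, parameters on the GPU
    h = torch.randn(4, device=device)   # stand-in for a hidden state produced on the GPU

    Y = torch.zeros(3, 4)               # defaults to the CPU, like the Y buffer in hybrid.py
    Y[0] = h                            # cross-device copy succeeds silently
    out(Y)                              # raises RuntimeError: input and weights are on different devices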

Zymrael commented 3 years ago

I believe that should be fixed by changing the first line of that snippet to Y = torch.zeros(x.shape[0], *h.shape).to(x), which forces Y to agree with x in both device and dtype. Feel free to open a PR if that ends up working :)
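The .to(tensor) form is the relevant mechanism here: when given another tensor instead of a device or dtype, Tensor.to returns a tensor converted to that tensor's device and dtype. A small illustration (variable names are illustrative):

import torch

x = torch.randn(5, dtype=torch.float64)
if torch.cuda.is_available():
    x = x.cuda()

Y = torch.zeros(3).to(x)        # Y now matches x: same device and same dtype
print(Y.device, Y.dtype)        # e.g. cuda:0 torch.float64 when a GPU is available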

qpwodlsqp commented 3 years ago

Confirmed that the modification works and opened a pull request.
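For reference, this is roughly what the patched excerpt looks like with the suggested one-line change applied (the merged change may differ in details):

Y = torch.zeros(x.shape[0], *h.shape).to(x)   # buffer now follows x's device and dtype
if self.reverse: x = x.flip(0)
for t, x_t in enumerate(x):
    h, c = self.jump_func(x_t, h, c)
    h = self.flow(h)
    Y[t] = h
Y = self.out(Y)                                # no device mismatch: Y already lives with the model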