Closed julian-urban closed 1 year ago
Hi @julian-urban ! Thanks for spotting and posting this issue :)
I can reproduce it locally in the develop branch as well. Will aim to fix it within the scope of the ongoing release process for 0.4 :v:
Hopefully just converting to float / checking type should fix it.
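A minimal sketch of what such a type check / conversion could look like (the helper name is hypothetical; the actual fix in #181 may differ):

```python
import torch

def sanitize_domain(integration_domain):
    """Hypothetical helper: accept a list or a torch.Tensor and always
    return a floating-point tensor, so integer entries like [[0, 1]]
    cannot silently break the downstream grid computation."""
    if isinstance(integration_domain, torch.Tensor):
        if integration_domain.is_floating_point():
            return integration_domain
        return integration_domain.float()
    return torch.tensor(integration_domain, dtype=torch.float32)
```

With this, sanitize_domain(torch.tensor([[0, 1]])) ends up with the same floating-point dtype as sanitize_domain([[0., 1.]]), so all three call variants from the report would behave identically.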
Should be fixed with #181, which will be part of Release 0.4.0! :)
Thanks for pointing it out!
Hi,
first of all, thanks a lot for this great package. I've been using it heavily over the past few weeks and I'm very happy with its performance.
The issue: today I started using the functionality for calculating derivatives with respect to domain boundaries. I had previously employed jit-compiled integrators, which require torch tensors as arguments for the integration domains. Compilation currently seems to be incompatible with computing the gradients, so I switched back to the non-compiled integrator, leaving everything else untouched. I then noticed that the integration consistently returns zero unless I pass the integration domain as a list instead of a torch tensor.
Example code to reproduce:
This returns the values 0.5, 0., and 0.5, but they should of course all be the same. This is unexpected behavior; at the very least, an error or a warning should inform the user about the incorrect type. Ideally, IMHO, the integration domain argument should accept lists, numpy arrays, and torch tensors alike. It's a minor issue, but it may lead to confusion (as it did in my case), so it might be a good idea to catch this somehow.
Cheers
Edit: I found that if I use torch.tensor([[0., 1.]]) instead of torch.tensor([[0, 1]]), it does work as expected. So the problem is not the type of the object, but that it contains integers instead of floats. I still believe this should be fixed, or at least a warning issued. I'm not sure why the jit-compiled version is not affected by this bug.
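The difference between the two tensors is only their inferred dtype, which suggests a simple workaround until the fix is released: convert the domain tensor to floating point before passing it to the integrator (a sketch in plain torch, independent of torchquad internals; the .float() call is the key part, and requires_grad_ is only needed for the boundary-derivative use case):

```python
import torch

domain = torch.tensor([[0, 1]])           # dtype inferred as torch.int64
assert not domain.is_floating_point()

# Workaround: convert to float before handing the domain to the
# integrator; mark it differentiable if boundary gradients are needed.
domain = domain.float().requires_grad_(True)
assert domain.is_floating_point()
```

This produces a tensor equivalent to writing torch.tensor([[0., 1.]]) directly, which the edit above found to work as expected.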