esa / torchquad

Numerical integration in arbitrary dimensions on the GPU using PyTorch / TF / JAX
https://www.esa.int/gsp/ACT/open_source/torchquad/
GNU General Public License v3.0

torch.tensor containing integers as integration domain returns zero with non-compiled integrator #180

Closed · julian-urban closed this issue 1 year ago

julian-urban commented 1 year ago

Hi,

first of all, thanks a lot for this great package. I've been using it heavily over the past few weeks and I'm very happy with its performance.

The issue: today I started using the functionality for calculating derivatives with respect to domain boundaries. I had previously used jit-compiled integrators, which require torch tensors as arguments for the integration domains. Compilation currently seems to be incompatible with computing the gradients, so I switched back to the non-compiled integrator, leaving everything else untouched. I then noticed that the integration consistently returns zero unless I pass the integration domain as a list instead of a torch tensor.

Example code to reproduce:

import torch
from torchquad import Simpson

integrator = Simpson()

# Passing the domain as a plain list returns the correct result (0.5):
print(integrator.integrate(
    lambda x: x,
    dim=1,
    N=101,
    integration_domain=[[0, 1]],
    backend='torch'))

# Passing an integer-valued torch.tensor silently returns 0:
print(integrator.integrate(
    lambda x: x,
    dim=1,
    N=101,
    integration_domain=torch.tensor([[0, 1]]),
    backend='torch'))

# The jit-compiled integrator handles the same integer tensor correctly:
jit_integrator = integrator.get_jit_compiled_integrate(
    dim=1,
    N=101,
    backend='torch')
print(jit_integrator(
    lambda x: x,
    torch.tensor([[0, 1]])))

This prints the values 0.5, 0., and 0.5, but all three should of course be identical. This is unexpected behavior: at minimum, an error or warning should inform the user about the incorrect type. Ideally, IMHO, lists, numpy arrays, and torch tensors should all be accepted for the integration domain argument. It's a minor issue, but it may lead to confusion (as it did in my case), so it might be a good idea to catch it somehow.

Cheers

Edit: I found that if I use torch.tensor([[0.,1.]]) instead of torch.tensor([[0,1]]), it works as expected. So the problem is not the type of the object but its dtype: it contains integers instead of floats. I still believe this should be fixed, or at least a warning issued. I'm not sure why the jit-compiled version is unaffected by this bug.
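As a workaround until a fix lands, explicitly casting the domain tensor to a floating-point dtype avoids the silent zero. A minimal sketch (variable names are illustrative, not part of torchquad's API):

```python
import torch

# An integer-valued domain like this triggers the silent-zero behavior:
domain_int = torch.tensor([[0, 1]])          # dtype is torch.int64
# Casting to a float dtype restores the correct result:
domain_float = domain_int.to(torch.float32)
print(domain_int.dtype, domain_float.dtype)
```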

gomezzz commented 1 year ago

Hi @julian-urban ! Thanks for spotting and posting this issue :)

I can reproduce it locally, also on the develop branch. I will aim to fix it in the scope of the ongoing release process for 0.4 :v:

Hopefully just converting to float / checking the type will fix it.
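A convert-or-warn check along these lines could do it. This is only a hypothetical sketch of the idea; `ensure_float_domain` is an illustrative name, not the actual change made in #181:

```python
import torch

def ensure_float_domain(integration_domain):
    # Hypothetical helper: if the domain is an integer-valued torch tensor,
    # cast it to the default floating-point dtype; otherwise pass it through.
    if isinstance(integration_domain, torch.Tensor) and not integration_domain.is_floating_point():
        return integration_domain.to(torch.get_default_dtype())
    return integration_domain

print(ensure_float_domain(torch.tensor([[0, 1]])).dtype)
```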

gomezzz commented 1 year ago

This should be fixed with #181, which will be part of Release 0.4.0! :)

Thanks for pointing it out!