FHof / torchquad

Multidimensional numerical integration on the GPU using PyTorch
https://www.esa.int/gsp/ACT/open_source/torchquad/
GNU General Public License v3.0

Reduce VEGAS memory usage when calculating gradients #39

Closed FHof closed 2 years ago

FHof commented 2 years ago

I moved the abort conditions into a separate function to reduce memory usage during gradient calculation. The memory usage over time can be visualized with the TensorBoard profiler. Previously, the scope of chi2 and other variables spanned the whole while-loop body, so PyTorch could not free the points from the five previously discarded iterations because these variables still referenced them in the autograd graph. Now these variables are scoped to a separate function and no longer live for the whole loop body. I also added an int conversion to the calculation of the new value of self._starting_N.
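The scoping pattern can be illustrated with a minimal sketch (not torchquad's actual VEGAS code; `integrand`, `_has_converged` and the toy convergence criterion are made up for illustration): because the check lives in its own function, its intermediate tensors go out of scope after every iteration instead of keeping earlier sample points alive in the autograd graph.

```python
import torch

def integrand(x):
    # Toy integrand; in torchquad this would be the user-supplied function.
    return torch.sin(x).sum(dim=1)

def _has_converged(sample_results, abs_tol):
    # The error estimate is confined to this function: once it returns, the
    # local tensors go out of scope, so autograd is free to release the
    # sampled points they reference instead of keeping them alive for the
    # whole while loop.
    err = sample_results.std()
    return bool(err < abs_tol)

def integrate(domain_scale, n_points=1000, max_iter=20, abs_tol=1e-3):
    results = []
    it = 0
    while it < max_iter:
        it += 1
        # Points depend on domain_scale, so gradients flow through them.
        points = torch.rand(n_points, 1) * domain_scale
        results.append(integrand(points).mean() * domain_scale)
        # Convergence check kept in a separate function so its intermediates
        # do not pin previous iterations' points in memory.
        if it > 1 and _has_converged(torch.stack(results), abs_tol):
            break
    return torch.stack(results).mean()

scale = torch.tensor(3.14159, requires_grad=True)
value = integrate(scale)
value.backward()
print(value.item(), scale.grad.item())
```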

Changes without whitespace removal or addition: https://github.com/FHof/torchquad/pull/39/files?w=1