I moved the abort conditions to a separate function to reduce memory usage during gradient calculation.
The memory usage can be visualized over time with the TensorBoard profiler.
The scope of chi2 and other variables spanned the whole while-loop body,
so PyTorch could not free the points from the previous five iterations: these variables still referred to the points in the autograd graph. These variables are now confined to a separate function's scope, so they no longer persist across the while-loop body.
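The scoping effect described above can be sketched without PyTorch, using plain Python reference semantics; `GraphNode`, `check_abort`, and the variable names are hypothetical stand-ins for illustration, not torchquad's actual code:

```python
import weakref

class GraphNode:
    """Stand-in for a tensor that keeps an autograd graph alive (hypothetical)."""
    pass

def check_abort(points):
    # chi2 lives only in this function's scope, so its reference into
    # the "graph" is dropped as soon as the function returns.
    chi2 = GraphNode()
    chi2.points = points
    return len(points) > 3  # some abort decision

points = [GraphNode() for _ in range(5)]
probe = weakref.ref(points[0])

# If chi2 were bound in the loop body instead, it would survive until the
# next iteration rebinds it, pinning the old graph in memory until then.
aborted = check_abort(points)

# After the call returns, only `points` itself keeps the nodes alive,
# so deleting it frees them immediately (CPython reference counting).
del points
assert probe() is None
```

With the variables in the loop body, `probe()` would still return the node here, because the lingering references would prevent the graph from being freed.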
I also added a conversion to int in the calculation of the new value of self._starting_N.
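The int conversion matters because scaling a sample count by a float ratio otherwise yields a float, which is invalid wherever an integer number of points is expected. A minimal sketch with hypothetical values (the actual update formula in the PR may differ):

```python
# Hypothetical update of a starting sample count between iterations.
starting_N = 1000
growth = 1.5

# Without int(), starting_N would become the float 1500.0 and could not
# be used directly as a number of sample points.
starting_N = int(starting_N * growth)

assert isinstance(starting_N, int)
```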
Changes without whitespace removal or addition: https://github.com/FHof/torchquad/pull/39/files?w=1