The ProximalGradientMethod algorithms take two loss functions, g (smooth) and h (non-smooth), in the constructor. These loss functions may be very large, especially if they are DataDependent, and because the algorithm object stores them, their references may inadvertently be kept alive: even if the caller releases the loss functions, the algorithm still holds them for as long as it exists.
Solution: The ProximalGradientMethod algorithms should take the loss functions as arguments to the run method.
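A minimal sketch of the proposed API, assuming the loss objects expose a `grad` method (smooth part) and a `prox` method (non-smooth part); the class layout, method names, and the toy `Quadratic` and `L1` losses below are illustrative assumptions, not the library's actual implementation. The key point is that the constructor holds only configuration, so no reference to g or h outlives the `run` call:

```python
class ProximalGradientMethod:
    def __init__(self, step_size=0.5, max_iter=200):
        # Configuration only -- no loss functions are retained here.
        self.step_size = step_size
        self.max_iter = max_iter

    def run(self, g, h, beta):
        # g and h are local parameters; they become collectable as soon
        # as run() returns, regardless of how long the algorithm lives.
        for _ in range(self.max_iter):
            beta = h.prox(beta - self.step_size * g.grad(beta),
                          self.step_size)
        return beta


class Quadratic:
    """Hypothetical smooth loss g(x) = 0.5 * (x - 3)^2."""
    def grad(self, x):
        return x - 3.0


class L1:
    """Hypothetical non-smooth loss h(x) = lam * |x|."""
    def __init__(self, lam):
        self.lam = lam

    def prox(self, x, t):
        # Soft-thresholding, the proximal operator of the L1 norm.
        s = t * self.lam
        return max(x - s, 0.0) if x > 0 else min(x + s, 0.0)


algo = ProximalGradientMethod()
beta = algo.run(Quadratic(), L1(lam=1.0), beta=0.0)
# Minimiser of 0.5*(x-3)^2 + |x| is x = 2.
```

With this design, a large DataDependent loss can be passed to `run`, used, and then garbage-collected, while the algorithm object itself can safely be kept around and reused with other loss functions.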