neurospin / pylearn-parsimony_history

Sparse and Structured Machine Learning in Python
BSD 3-Clause "New" or "Revised" License

Possible memory leak in the Proximal gradient algorithms. #2

Closed — tomlof closed this issue 10 years ago

tomlof commented 11 years ago

The ProximalGradientMethod algorithms take two loss functions, g (smooth) and h (non-smooth), in the constructor. These loss functions may be very large, especially if they are DataDependent, and the algorithm inadvertently keeps them alive: if the caller releases its reference to a loss function but not to the algorithm, the loss function (and any data it holds) cannot be garbage-collected.

Solution: The ProximalGradientMethod algorithms should instead take the loss functions as arguments to the run method, so that no long-lived reference to them is stored on the algorithm object.
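A minimal sketch of the proposed API, with hypothetical helper classes (SquaredLoss, L1) standing in for the library's actual loss functions: the algorithm stores only its hyper-parameters, and g and h are passed to run, so they go out of scope when the caller is done with them even if the algorithm instance is kept.

```python
import numpy as np


class ProximalGradientMethod:
    """Sketch: the algorithm holds no references to the loss functions."""

    def __init__(self, step=0.5, max_iter=100):
        self.step = step
        self.max_iter = max_iter

    def run(self, g, h, beta):
        # g is the smooth part (provides grad), h the non-smooth part
        # (provides prox). Neither is stored on self.
        for _ in range(self.max_iter):
            beta = h.prox(beta - self.step * g.grad(beta), self.step)
        return beta


class SquaredLoss:
    # Hypothetical smooth loss g(beta) = 0.5 * ||X beta - y||^2.
    # This object can hold a large data matrix X.
    def __init__(self, X, y):
        self.X, self.y = X, y

    def grad(self, beta):
        return self.X.T @ (self.X @ beta - self.y)


class L1:
    # Hypothetical non-smooth penalty h(beta) = l * ||beta||_1;
    # its proximal operator is soft thresholding.
    def __init__(self, l):
        self.l = l

    def prox(self, beta, step):
        return np.sign(beta) * np.maximum(np.abs(beta) - step * self.l, 0.0)


algorithm = ProximalGradientMethod(step=0.5, max_iter=100)
X = np.eye(3)
y = np.array([1.0, -2.0, 3.0])
beta = algorithm.run(SquaredLoss(X, y), L1(0.0), np.zeros(3))
# The SquaredLoss (and its data matrix X) is now unreachable and can be
# garbage-collected, even though `algorithm` is still alive.
```

With the penalty weight set to 0 the prox step is the identity, so on this tiny identity-design problem the iterates converge to y, which makes the sketch easy to check.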