facebookresearch / theseus

A library for differentiable nonlinear optimization
MIT License

How to skip the gradient explosion? #590

Closed FanWu-fan closed 1 year ago

FanWu-fan commented 1 year ago

❓ Questions and Help

When I run the optimization with `layer = th.TheseusLayer(th.LevenbergMarquardt(obj, max_iterations=2500, step_size=0.005))` and `layer.forward(optimizer_kwargs={"verbose": True, "damping": 0.01, "track_best_solution": True})`, I sometimes get the following error:

`RuntimeError: There was an error while running the linear optimizer. Original error message: linalg.cholesky: (Batch element 0): The factorization could not be completed because the input is not positive-definite (the leading minor of order 2 is not positive-definite). Backward pass will not work. To obtain the best solution seen before the error, run with torch.no_grad().`

How can I skip the backward pass when this happens, similar to gradient clipping?
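For context, this is roughly the pattern I have in mind for my training loop. The sketch below is only an assumption about how the failure could be skipped, not something from the library docs; `obj`, `inputs`, `compute_outer_loss`, and `outer_optimizer` are placeholders for my objective, input tensor dict, outer loss function, and outer optimizer:

```python
import torch
import theseus as th

# `obj` is the th.Objective built elsewhere; `inputs` is the dict of input
# tensors for the layer; `outer_optimizer` is a regular torch optimizer.
layer = th.TheseusLayer(
    th.LevenbergMarquardt(obj, max_iterations=2500, step_size=0.005)
)

try:
    values, info = layer.forward(
        inputs,
        optimizer_kwargs={
            "verbose": True,
            "damping": 0.01,
            "track_best_solution": True,
        },
    )
    loss = compute_outer_loss(values)  # placeholder for the outer-loop loss
    outer_optimizer.zero_grad()
    loss.backward()
    outer_optimizer.step()
except RuntimeError as e:
    # Assumption: if the inner linear solver fails (e.g., Cholesky on a
    # non-positive-definite system), skip the backward pass and the outer
    # update for this batch instead of crashing the training loop.
    print(f"Skipping batch due to linear optimizer failure: {e}")
```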

luisenp commented 1 year ago

Sorry, I don't understand what you want to achieve exactly. Is your wish to backpropagate through a failed linear optimizer run?