❓ Questions and Help
When I run the optimization

```python
layer = th.TheseusLayer(
    th.LevenbergMarquardt(obj, max_iterations=2500, step_size=0.005)
)
layer.forward(
    optimizer_kwargs={"verbose": True, "damping": 0.01, "track_best_solution": True}
)
```

I sometimes get this error:

```
RuntimeError: There was an error while running the linear optimizer. Original error
message: linalg.cholesky: (Batch element 0): The factorization could not be completed
because the input is not positive-definite (the leading minor of order 2 is not
positive-definite).. Backward pass will not work. To obtain the best solution seen
before the error, run with torch.no_grad().
```

How can I skip the backward pass when this happens, similar to how gradient clipping guards against bad gradients?
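For context, here is roughly the workaround I have in mind (a sketch only: `obj`, `inputs`, and `compute_loss` are placeholders from my setup, and the try/except retry under `torch.no_grad()` is just my reading of the error message, not an official Theseus API):

```python
import torch
import theseus as th

layer = th.TheseusLayer(
    th.LevenbergMarquardt(obj, max_iterations=2500, step_size=0.005)
)
opt_kwargs = {"verbose": True, "damping": 0.01, "track_best_solution": True}

try:
    # Differentiable solve: gradients flow through the optimizer iterations.
    values, info = layer.forward(inputs, optimizer_kwargs=opt_kwargs)
    loss = compute_loss(values)  # hypothetical outer loss on the solution
    loss.backward()
except RuntimeError:
    # Cholesky failed mid-solve; re-run without autograd to recover the
    # best solution tracked so far, and skip the gradient step this batch.
    with torch.no_grad():
        values, info = layer.forward(inputs, optimizer_kwargs=opt_kwargs)
```

Is something like this the intended pattern, or is there a built-in way to do it?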