Closed Levi-Armstrong closed 7 months ago
You found the cause?
I'll test this tomorrow, but before merging all debug modes should be switched off again.
I think so. The only difference with compare mode enabled was in the value used for infinity: the old version used std::numeric_limits<double>::infinity(), whereas trajopt ifopt used OSQP_INFTY, which is 1e30. I am testing now to see whether that accounts for the difference.
I have found another bug in how the constraint merit coefficient is handled in the new version. The best merit is not updated when the merit coefficient is increased, which causes the solver to keep failing: it compares the previous best to the new best, and the new best can never beat the previous best because the previous best is not recomputed to account for the new constraint merit coefficients. I will have to do some comparison against the previous version to fix this correctly.
This does solve issue #353
The numerical IK tests now succeed for me on Jammy/Humble. (The std::scoped_lock build error is likely a C++ build version issue, as the correct header is included. This feature has been introduced in C++17.)
Yeah, I tried increasing the CXX version to 17 but it still fails, and I'm not sure why. I will merge this and open another issue for it. Would you have time to take a look, since you are running Jammy locally?
I'll look into it later this week.
This fixes the last two known issues with the trajopt ifopt implementation of the trust region method. It has been known that it does not correctly handle the constraint merit coefficient; this PR fixes one known issue along with a new bug found during the investigation.
The first issue was that the gradient of the constraint slack variables needs to equal the constraint merit coefficient. This adds a new setConstraintMeritCoeff method to the QPProblem interface and uses it.
The second bug, found during the investigation, is that the best merit in the results data structure needs to be updated when the constraint merit coefficient changes.