hungpham2511 / toppra

robotic motion planning library
https://hungpham2511.github.io/toppra/index.html
MIT License

Another corner case #17

Closed. pbeeson closed this issue 4 years ago.

pbeeson commented 6 years ago

Hung,

I ran into a case today where the following two waypoints for a 6-DOF arm were sent to toppra, and it failed because the controllable-sets computation saw a NaN in K.

Reproduce by updating scalar_example.py:

    waypts = [[-9.089468271438139e-07, -0.46400441351211447, -0.5760014655483718, -3.9375206752326924e-07, -1.6999970211081608, 5.429519493702008e-06], [0.0, -0.464, -0.576, 0.0, -1.7, 0.0]]
    path = ta.SplineInterpolator(np.linspace(0, 1, len(waypts)), waypts)
    vlim = np.array([[-3, 3],[-3, 3],[-3, 3],[-3, 3],[-3, 3],[-3, 3]])
    alim = np.array([[-4, 4],[-4, 4],[-4, 4],[-4, 4],[-4, 4],[-4, 4] ])

However, I noticed that if I comment out the np.isnan(K[i]).any() check in both compute_controllable_sets() and compute_parameterization(), the rest of compute_parameterization() seems to work just fine and produce the correct answer.

So could you either ignore NaNs in K, or at least set the NaNs in K to constants.MAXX?

Both of those "solutions" work for me in testing.
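
For concreteness, the second option I have in mind is roughly the following (a hypothetical sketch of the clamping idea, not a patch from the repo), applied inside the backward loop right after K[i] has been computed:

    # Hypothetical sketch: clamp NaNs in K[i] to MAXX instead of aborting,
    # so the backward pass keeps going (K and i come from the enclosing loop).
    import numpy as np
    from toppra.constants import MAXX

    if np.isnan(K[i]).any():
        K[i] = np.where(np.isnan(K[i]), MAXX, K[i])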

hungpham2511 commented 6 years ago

Thanks for the report. Most likely it is a numerical issue. I will investigate this problem and let you know.

Your workarounds can be dangerous though, since they can return bad solutions for infeasible cases: a NaN in K means the controllable set is empty, i.e. the path is genuinely not parametrizable under the given constraints.

hungpham2511 commented 6 years ago

Hmm, I can run scalar_example.py with your settings without any problem. Are you sure you are using the latest commit and have re-built the Cython source file?

pbeeson commented 6 years ago

Ok, the following local change in constants is causing the problem. I believe I needed this early on for really small values. Let me back out that change in constants, update to the latest, and run through my growing collection of test cases.

diff --git a/toppra/constants.py b/toppra/constants.py
index 97da017..85a6165 100644
--- a/toppra/constants.py
+++ b/toppra/constants.py
@@ -4,6 +4,6 @@ TINY = 1e-8
 SMALL = 1e-5
 LARGE = 1000.0
 INFTY = 1e8
-MAXU = 10000  # Max limit for `u`
-MAXX = 10000  # Max limit for `x`
-MAXSD = 100   # square root of maxx
+MAXU = 1000000  # Max limit for `u`
+MAXX = 1000000  # Max limit for `x`
+MAXSD = 1000   # square root of maxx

pbeeson commented 6 years ago

Oh yeah, the reason I chose 1000 for MAXSD (and changed MAXX and MAXU to match) was that for an N-waypoint trajectory with zero actual motion, I wanted each waypoint to have only 1 ms time separation instead of 10 ms (because I know that my user's application detects 1 ms intervals and treats them specially). But apparently this 1000/1000000 change does not always play well with the other constants.
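
To spell out the arithmetic I'm relying on (my reading of algorithm.py, where delta_t = delta_s / sd_average and the path velocity is capped at MAXSD):

    # Back-of-the-envelope: when a zero-motion segment saturates the path
    # velocity cap, its duration is roughly delta_s / MAXSD, so raising MAXSD
    # from 100 to 1000 shrinks the per-waypoint separation by 10x
    # (10 ms -> 1 ms in my setup).
    delta_s = 1.0                       # illustrative segment length
    for maxsd in (100, 1000):           # default constant vs. my change
        print(maxsd, delta_s / maxsd)   # -> 0.01 s (10 ms) vs. 0.001 s (1 ms)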

hungpham2511 commented 6 years ago

I see. In that case, I recommend keeping the default constants. I understand that those constants can be quite arbitrary, but they are necessary to ensure numerical stability. I have also learned (the hard way) that implementing a numerical solver is no easy task and should only be done by specialists.

A really easy way to reduce this "lower bound" is to use a shorter path. Maybe you can try this:

    # assuming the usual imports from the toppra examples:
    import numpy as np
    import toppra as ta
    import toppra.constraint as constraint
    import toppra.algorithm as algo

    waypts = [[-9.089468271438139e-07, -0.46400441351211447, -0.5760014655483718, -3.9375206752326924e-07, -1.6999970211081608, 5.429519493702008e-06],
              [0.0, -0.464, -0.576, 0.0, -1.7, 0.0]]
    path = ta.SplineInterpolator(np.linspace(0, 1e-3, len(waypts)), waypts)
    vlim = np.array([[-3, 3],[-3, 3],[-3, 3],[-3, 3],[-3, 3],[-3, 3]])
    alim = np.array([[-4, 4],[-4, 4],[-4, 4],[-4, 4],[-4, 4],[-4, 4]])
    pc_vel = constraint.JointVelocityConstraint(vlim)
    pc_acc = constraint.JointAccelerationConstraint(
        alim, discretization_scheme=constraint.DiscretizationType.Interpolation)

    instance = algo.TOPPRA([pc_vel, pc_acc], path, solver_wrapper='seidel')
    jnt_traj, aux_traj = instance.compute_trajectory(0, 0)

I think a solution to this annoying problem is to scale the path length (the 1e-3 value) internally, for instance scaling it down when there are exceedingly high path velocities.
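
In the meantime, a caller could approximate that scaling from the outside. A rough sketch (a hypothetical helper, not part of toppra) that picks the spline interval from the total joint travel:

    # Hypothetical caller-side heuristic (not part of toppra): choose the
    # spline interval from the total joint-space travel, so tiny motions get
    # a short path interval while normal motions keep the usual one.
    import numpy as np

    def suggested_interval(waypts, min_interval=1e-3, max_interval=1.0):
        wp = np.asarray(waypts, dtype=float)
        travel = np.sum(np.linalg.norm(np.diff(wp, axis=0), axis=1))
        return float(np.clip(travel, min_interval, max_interval))

    path = ta.SplineInterpolator(
        np.linspace(0, suggested_interval(waypts), len(waypts)), waypts)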

Another solution is to reimplement the seidel solver in a more numerically stable way, or to use another solver altogether. However, in my experience seidel is almost always a few times faster than the other solvers.

pbeeson commented 6 years ago

Ok, I'll do that. Thanks for the tips.

pbeeson commented 6 years ago

FYI, using your default constants and a spline with interval 0..0.001 (as you suggest above), I found that for any zero-motion trajectory of any DOF with >= 3 waypoints, the duration of the trajectory is always 0.00063245 seconds. This tells me that for a zero-motion trajectory of size 2, the catch-all delta_t (used when the velocity is too tiny) in algorithm.py should be 0.00063245 instead of 5 (I had it at 0.001 before). This of course assumes 0.001 for the interval, but that tip seems to be working so far. Since 0.00063245 < 0.001, my user's application, which uses this rule to catch non-motion points and avoid issues on their industrial manipulator, should still work.

pbeeson commented 6 years ago

Wait, when I use 0.001 for the spline interval, I get wildly different trajectory durations out of toppra. Trajectories run on the order of 10x faster now, which just seems wrong. So this "fix" is really only applicable to very small motions.

pbeeson commented 6 years ago

So my current solution will be to go back to your constants, go back to 1.0 as the top spline/gridpoint interval, and just inform my users to use 0.02+epsilon as a "zero motion" delta.

hungpham2511 commented 6 years ago

Clearly this should not happen. It seems like there is a bug hiding somewhere. I will investigate this issue.

pbeeson commented 6 years ago

The real issue with having 0.02 as the minimum time between any two waypoints is that for straight-line motion you generally have lots of nearby waypoints to keep the hand/end effector on a straight line. By making this 0.02 instead of 0.001, there is now a noticeable slowdown in my straight-line trajectories. But of course, everything is stable.

hungpham2511 commented 6 years ago

Ok, so clearly if we scale the path duration down by, say, 10 times, the average path velocity squared decreases by 100 times. Now, if this value is smaller than 1e-5, we run into trouble because of numerical precision issues.
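
A quick numeric illustration of that square relationship (nothing toppra-specific):

    # Shrinking the path-parameter interval by a factor k shrinks the path
    # velocity sd by ~k, hence x = sd^2 by ~k^2; once x drops below SMALL
    # (1e-5) the solver runs into precision trouble.
    SMALL = 1e-5
    sd = 0.02                      # some average path velocity on the long path
    for k in (1.0, 10.0):          # original interval vs. a 10x shorter one
        x = (sd / k) ** 2
        print(k, x, x < SMALL)     # -> 4e-4 (fine) vs. 4e-6 (trouble)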

In any case, it seems like in order to address all these issues, a proper scaling procedure is needed.

pbeeson commented 6 years ago

Honestly, the best thing I've found so far is what I originally proposed above:

1) Go with the constants I posted above, with MAXSD at 1000 (this also requires increasing MAXSD in _CythonUtils and scaling MAXX and MAXU accordingly), which yields 0.001 s (1 ms) between each waypoint for a zero-motion trajectory with >= 3 waypoints.
2) Set the minimum delta_t in algorithm.py to 0.001 for 2-waypoint trajectories.

Of course, this brings back the original problem of this thread, but again, if I simply ignore the fact that there are NaNs in the K values computed in the parameterization (and controllable-sets) calculations, the system works. The NaNs eventually ground out into the default 0.001 s interval due to MAXSD, and toppra doesn't blow up. This might not hold for the desired_duration algorithm, as I'm only using the time_optimal algorithm, but I'm running many fairly large trajectories on a 7-DOF manipulator doing complex straight-line and curved motions in simulation, and I never see any issue with ignoring NaNs. Normally I'd also be wary of letting NaNs propagate through the math, but your inequality checks in the calculations seem to handle them appropriately, and the NaNs are not causing math errors.

pbeeson commented 6 years ago

This is my current local repo, and it seems to be working very well. It always finds an answer (so far -- it has been running only a few hours, but that's thousands of "random" trajectories of all sorts) and has a minimum time between waypoints of 1 ms.

11:37 $ git diff upstream/master toppra
diff --git a/toppra/_CythonUtils.pyx b/toppra/_CythonUtils.pyx
index 6d75ae4..c6329bb 100644
--- a/toppra/_CythonUtils.pyx
+++ b/toppra/_CythonUtils.pyx
@@ -11,7 +11,7 @@ cdef inline np.float64_t float64_abs(
     FLOAT_t a): return a if a > 0 else - a

 cdef float INFTY = 1e8
-cdef float MAXSD = 100  # Maximum allowable path velocity
+cdef float MAXSD = 1000  # Maximum allowable path velocity

 cpdef _create_velocity_constraint(np.ndarray[double, ndim=2] qs,
                                   np.ndarray[double, ndim=2] vlim):
diff --git a/toppra/algorithm/algorithm.py b/toppra/algorithm/algorithm.py
index 2694e29..f857cab 100644
--- a/toppra/algorithm/algorithm.py
+++ b/toppra/algorithm/algorithm.py
@@ -123,7 +123,7 @@ class ParameterizationAlgorithm(object):
             if sd_average > TINY:
                 delta_t = delta_s / sd_average
             else:
-                delta_t = 5  # If average speed is too slow.
+                delta_t = 0.001  # If average speed is too slow.
             t_grid[i] = t_grid[i - 1] + delta_t
             if delta_t < TINY:  # if a time increment is too small, skip.
                 skip_ent.append(i)
diff --git a/toppra/algorithm/reachabilitybased/reachability_algorithm.py b/toppra/algorithm/reachabilitybased/reachability_algorithm.py
index 72170cb..79af343 100644
--- a/toppra/algorithm/reachabilitybased/reachability_algorithm.py
+++ b/toppra/algorithm/reachabilitybased/reachability_algorithm.py
@@ -137,9 +137,10 @@ class ReachabilityAlgorithm(ParameterizationAlgorithm):
             K[i] = self._one_step(i, K[i + 1])
             if K[i, 0] < 0:
                 K[i, 0] = 0
-            if np.isnan(K[i]).any():
-                logger.warn("K[{:d}]={:}. Path not parametrizable.".format(i, K[i]))
-                return K
+
+            # if np.isnan(K[i]).any():
+            #     logger.warn("K[{:d}]={:}. Path not parametrizable.".format(i, K[i]))
+            #     return K
             if logger.isEnabledFor(logging.DEBUG):
                 logger.debug("[Compute controllable sets] K_{:d}={:}".format(i, K[i]))

@@ -205,12 +206,12 @@ class ReachabilityAlgorithm(ParameterizationAlgorithm):
         """
         assert sd_end >= 0 and sd_start >= 0, "Path velocities must be positive"
         K = self.compute_controllable_sets(sd_end, sd_end)
-        if np.isnan(K).any():
-            logger.warn("The set of controllable velocities at the beginning is empty!")
-            if return_data:
-                return None, None, None, K
-            else:
-                return None, None, None
+        # if np.isnan(K).any():
+        #     logger.warn("The set of controllable velocities at the beginning is empty!")
+        #     if return_data:
+        #         return None, None, None, K
+        #     else:
+        #         return None, None, None

         x_start = sd_start ** 2
         if x_start + SMALL < K[0, 0] or K[0, 1] + SMALL < x_start:
diff --git a/toppra/constants.py b/toppra/constants.py
index 97da017..85a6165 100644
--- a/toppra/constants.py
+++ b/toppra/constants.py
@@ -4,6 +4,6 @@ TINY = 1e-8
 SMALL = 1e-5
 LARGE = 1000.0
 INFTY = 1e8
-MAXU = 10000  # Max limit for `u`
-MAXX = 10000  # Max limit for `x`
-MAXSD = 100   # square root of maxx
+MAXU = 1000000  # Max limit for `u`
+MAXX = 1000000  # Max limit for `x`
+MAXSD = 1000   # square root of maxx
diff --git a/toppra/interpolator.py b/toppra/interpolator.py
index e1ec26c..0e82bce 100644
--- a/toppra/interpolator.py
+++ b/toppra/interpolator.py
@@ -5,6 +5,7 @@ Most are simple wrappers over scipy.interpolators.
 import numpy as np
 from scipy.interpolate import UnivariateSpline, CubicSpline, PPoly
 import logging
+logging.basicConfig()
 logger = logging.getLogger(__name__)
 try:
     import openravepy as orpy

github-actions[bot] commented 4 years ago

Stale issue message