RobotLocomotion / drake

Model-based design and verification for robotics.
https://drake.mit.edu

Correct use of TaylorVar? #1147

Closed aespielberg closed 9 years ago

aespielberg commented 9 years ago

Hi,

Sorry for the dumb question, but when I try to add function handle constraints, my state is converted to a TaylorVar. For example, I make a constraint:

constraint = FunctionHandleConstraint(lb, ub, 10, @(state)positionDiff(plant, frame_name, point_in_frame, target_point, state), 0);

(I don't think all the constraint parameterization here is important, but if you want I can post the function.)

Then, I add it to my program:

prog = prog.addConstraint(constraint,  [last_x_inds; prog.param_inds(:)]);

However, in the function positionDiff (which I have defined), state gets passed in as a TaylorVar.

I'm curious what the correct way to use this TaylorVar is, then. I had planned on manipulating the state simply as a double (I thought I had done this correctly in the past, but I never used addConstraint, only addStateConstraint, so I'm unsure if it's different). I can't pass a TaylorVar to plant.setParams(), so for my parameterization purposes I can't use it like this. I also keep getting the following warning when I try to treat it like a double:

Warning: converting taylorvar to double even though it has non-zero gradients.  gradient information will be lost! 
> In TaylorVar/double (line 47)
  In RigidBodyManipulator/forwardKin (line 127)
  In positionDiff (line 43)
  In constructPositionDiffConstraint>@(state)positionDiff(plant,frame_name,point_in_frame,target_point,state) (line 3)
  In FunctionHandleConstraint/constraintEval (line 44)
  In Constraint>@(varargin)obj.constraintEval(varargin{:}) (line 123)
  In geval (line 171)
  In Constraint/eval (line 123)
  In NonlinearProgram/objectiveAndNonlinearConstraints (line 581)
  In NonlinearProgram/snopt/snopt_userfun (line 1314)
  In snoptUserfun (line 6)
  In NonlinearProgram/snopt (line 1377)
  In NonlinearProgram/solve (line 973)
  In DirectTrajectoryOptimization/solveTraj (line 188)
  In KinematicDirtranTest/solveTraj (line 82)
  In AcrobotPlantTest/swingUpTrajectoryKinematic (line 200)
  In runSwingUpKinematicTest (line 6) 

Is there a correct usage here? I imagine the gradient information is useful in some way for returning df in my constraint. I should note explicitly that the warning comes from using TaylorVar in forwardKin, which might not be a big deal since forwardKin returns a gradient anyway. I still don't know the proper way to use it with setParams, though. I have semi-solved the problem by directly casting to double.

RussTedrake commented 9 years ago

Just to be clear on how you got here: NonlinearProgram is trying to auto-differentiate your constraint function for you (hence the TaylorVar). This is happening because of the call in your stack to geval (short for gradient eval) which tries autodiff by default if you have not supplied the gradients to the requested order manually. This is a powerful feature. :)
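To illustrate what geval is doing (a sketch only; the exact options-struct field and geval calling convention here are stated from memory, so treat them as assumptions):

```matlab
% Hypothetical sketch: ask geval for the gradient of a function via
% automatic differentiation, instead of supplying df by hand.
options.grad_method = 'taylorvar';   % autodiff; alternatives: 'numerical', 'user'
[f, df] = geval(@(x) sin(x(1))*x(2), [pi/4; 2], options);
% geval seeds x as a TaylorVar, so every operation inside the function
% propagates first-order Taylor coefficients, and df comes back without
% the user ever writing down a derivative.
```

This is why your state shows up as a TaylorVar: the function is being evaluated on a derivative-carrying type, not on plain doubles.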

You've hit a case in forwardKin where we were not properly supporting the autodiff. I just sent a pull request with a one-line fix. Unfortunately, it means that you found some code that is not covered by our autodiff unit tests, so there may be a few more one-line fixes like this to make it work. Try it and let me know.

Other options are to set the gradient method for the constraint to be 'numerical' or to implement the 'user' gradients yourself. You might look at the help in geval to understand better.
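Concretely, reusing the constructor call from the question above (a sketch; it assumes grad_method is a directly settable property, which is worth verifying against Constraint.m):

```matlab
% Sketch: sidestep TaylorVars by telling the Constraint which gradient
% method geval should use for this function.
constraint = FunctionHandleConstraint(lb, ub, 10, ...
    @(state) positionDiff(plant, frame_name, point_in_frame, target_point, state), 0);
constraint.grad_method = 'numerical';  % finite differences; or 'user' if
                                       % positionDiff returns [f, df] itself
prog = prog.addConstraint(constraint, [last_x_inds; prog.param_inds(:)]);
```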


aespielberg commented 9 years ago

Hi Russ,

Thank you for the reply. I copied in the line you changed in the pull request, but I still get the same problem:

Warning: converting taylorvar to double even though it has non-zero gradients.  gradient information will be lost!
> In TaylorVar/double (line 47)
  In positionDiff (line 20)
  In constructPositionDiffConstraint>@(state)positionDiff(plant,frame_name,point_in_frame,target_point,state) (line 3)
  In FunctionHandleConstraint/constraintEval (line 44)
  In Constraint>@(varargin)obj.constraintEval(varargin{:}) (line 123)
  In geval (line 171)
  In Constraint/eval (line 123)
  In NonlinearProgram/objectiveAndNonlinearConstraints (line 581)
  In NonlinearProgram/snopt/snopt_userfun (line 1314)
  In snoptUserfun (line 6)
  In NonlinearProgram/snopt (line 1377)
  In NonlinearProgram/solve (line 973)
  In DirectTrajectoryOptimization/solveTraj (line 188)
  In KinematicDirtranTest/solveTraj (line 82)
  In AcrobotPlantTest/swingUpTrajectoryKinematic (line 200)
  In runSwingUpKinematicTest (line 6)

If I'm understanding you correctly, are you saying that hand-providing the gradients will cause it not to use TaylorVars, and instead use just the f and df data given and leave everything else as doubles? This is something I originally tried and was doing before I started adding parameters to my model. The tricky thing is, as I've mentioned, that I have parameterized kinematics, and the parameters are actually inputs to the constraints - and determining derivatives of the location of an object w.r.t. the parameters is tricky. The autodiffing sounds very powerful, but I don't understand how these derivatives can really be generated - it sounds like magic if true, but I will read up on geval.

I see the options are something passed into geval. This leads me to two questions: how can I pass them into addConstraint (the documentation doesn't say that's something I can pass in), or is that something I pass into the solver? Secondly, when I define my constraint, I know I am supposed to make it return [f, df]. What happens if I give it a totally bogus df (potentially even with the wrong dimensionality)?

Finally, on a related topic, I have my optimization running now - my z is [x for all time; u for all time; params]. I notice, though, that across all its iterations my params never change, and therefore the solution is deemed infeasible. Is this related to the fact that I am not providing gradient information with the constraints? Or do I need to explain to it somehow that it has new variables in its z vector that it should try to vary?

Also (sorry for so many questions), on a related note: how does the solver know there are no feasible solutions?

-Andy S.

Andrew Spielberg PhD Student MIT - Computer Science and Artificial Intelligence Laboratory

On Thu, Jul 9, 2015 at 6:00 AM, Russ Tedrake notifications@github.com wrote:

Just to be clear on how you got here: NonlinearProgram is trying to auto-differentiate your constraint function for you (hence the TaylorVar). This is happening because of the call in your stack to geval (short for gradient eval) which tries autodiff by default if you have not supplied the gradients to the requested order manually. This is a powerful feature. :)

You've hit a case in forwardKin where we were properly supporting the autodiff. I just sent a pull-request with a one line fix. Unfortunately, it means that you found some code that is not covered by our autodiff unit tests, so there may be a few more one-line fixes like this to make it work. Try it and let me know.

Other options are to set the gradient method for the constraint to be 'numerical' or to implement the 'user' gradients yourself. You might look at the help in geval to understand better.

On Jul 8, 2015, at 11:51 PM, aespielberg notifications@github.com wrote:

Hi,

Sorry for the dumb question, but, when I try to add function handle constraints, my state is converted to a TaylorVar. For example, I make a constraint:

constraint = FunctionHandleConstraint(lb, ub, 10, @(state)positionDiff(plant, frame_name, point_in_frame, target_point, state), 0); (I don't think all the constraint parameterization here is important, but if you want I can post the function.)

Then, I add it to my program:

prog = prog.addConstraint(constraint, [last_x_inds; prog.param_inds(:)]); However, in the function positionDiff (which I have defined), state gets passed in as a TaylorVar.

I'm curious what the correct way to use this TaylorVar, then. I had planned on manipulating the state simply as a double (I thought I had done this correctly in the past but I never used addConstraint, only addStateConstraint, so I'm unsure if it's different). I can't pass in TaylorVar to plant.setParams(), so for my parameterization purposes, I can't use it like this. I also keep getting the following warning when I try to treat it like a double:

Warning: converting taylorvar to double even though it has non-zero gradients. gradient information will be lost!

In TaylorVar/double (line 47) In RigidBodyManipulator/forwardKin (line 127) In positionDiff (line 43) In constructPositionDiffConstraint>@(state)positionDiff(plant,frame_name,point_in_frame,target_point,state) (line 3) In FunctionHandleConstraint/constraintEval (line 44) In Constraint>@(varargin)obj.constraintEval(varargin{:}) (line 123) In geval (line 171) In Constraint/eval (line 123) In NonlinearProgram/objectiveAndNonlinearConstraints (line 581) In NonlinearProgram/snopt/snopt_userfun (line 1314) In snoptUserfun (line 6) In NonlinearProgram/snopt (line 1377) In NonlinearProgram/solve (line 973) In DirectTrajectoryOptimization/solveTraj (line 188) In KinematicDirtranTest/solveTraj (line 82) In AcrobotPlantTest/swingUpTrajectoryKinematic (line 200) In runSwingUpKinematicTest (line 6) Is there a correct usage here? I imagine the gradient information is useful in some way for returning df in my constraint.

— Reply to this email directly or view it on GitHub.

— Reply to this email directly or view it on GitHub https://github.com/RobotLocomotion/drake/issues/1147#issuecomment-119894936 .

RussTedrake commented 9 years ago

You can set the grad_method property of the Constraint class. See the documentation next to the property here: https://github.com/RobotLocomotion/drake/blob/master/solvers/Constraint.m

If you provide the gradients incorrectly, then it won't work: the wrong size will probably result in a MATLAB "dimension mismatch" error, and wrong values will probably result in snopt returning info > 30.

snopt can tell if its solution violates one of the constraints, so it can report infeasibility of the solution. The limitation is that it is a local optimizer, so it can sometimes return "infeasible" even if a feasible solution does exist, simply because it got stuck in a local optimum.
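For the 'user' option, the constraint function has to return both the value and a Jacobian of the right shape. A minimal sketch (this particular function is made up purely for illustration):

```matlab
% Sketch of a user-gradient constraint function: for num_cnstr constraints
% on an xdim-dimensional decision variable, f is num_cnstr-by-1 and df
% must be num_cnstr-by-xdim (one row of partials per constraint).
function [f, df] = myConstraintFun(x)
  f  = x(1)^2 + x(2);    % 1-by-1 constraint value
  df = [2*x(1), 1];      % 1-by-2 Jacobian, df/dx
end
```

A df of the wrong size surfaces as a dimension-mismatch error inside the solver wrapper; a df with wrong values typically shows up as snopt failing to converge.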


aespielberg commented 9 years ago

Hi Russ,

Thanks a ton. I set the gradients to be numerical and not user defined and now everything works perfectly with the trajectory optimization.

I would like to be able to provide the derivatives, as I imagine that would save computation time and be more precise. Perhaps I should start a new issue for this, but I am just wondering if there is any way to get the forward kinematics in terms of the parameters? E.g., if I have parameterized lengths on an acrobot, is there a way to get the coordinates of the elbow joint and the COM of link 2 as a symbolic expression in terms of the lengths of links 1 and 2? I am wondering this because then I could add functionality (and eventually push it back to Drake) that would automatically generate gradients for parameters by symbolically differentiating the expression and then substituting in the current parameter values.

If that's not supported, is there a file you would recommend I edit to add this functionality?

-Andy S.


RussTedrake commented 9 years ago

It would definitely be a good thing to have, but it's not trivial with the current tools.

Here's one (possibly crazy) idea: the way that the parameter estimation examples work is that they call the dynamics once to get back a polynomial (or trigpoly) representation of the dynamics in terms of the state AND the parameters. Once you have that, it's trivial to take the derivatives that you want. The drawback is that you also pick up some additional constraints (s^2 + c^2 = 1). But you could imagine a workflow where you do something like extractTrigPolySystem, but instead do an extractParameterSystem, and the result is a new trigpoly dynamical system that takes the parameters as inputs. Or something like that.

Lots of things are possible. Finding the best solution will just take some brainstorming about system architecture.
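The symbolic route described in the question can be prototyped with MATLAB's Symbolic Math Toolbox. A sketch, where the acrobot geometry (shoulder at the origin, joint angle measured from the downward vertical) is an assumption made only for illustration:

```matlab
% Sketch: elbow position of an acrobot as a symbolic expression in the
% link length l1, then its gradient w.r.t. the parameters.
syms q1 q2 l1 l2 real
elbow = [ l1*sin(q1);                       % planar forward kinematics
         -l1*cos(q1)];                      % of the elbow point
delbow_dparams = jacobian(elbow, [l1, l2]); % symbolic parameter gradient
% Substitute current parameter/state values to get a numeric Jacobian:
dnum = double(subs(delbow_dparams, {q1, l1}, {pi/3, 1.0}));
```

The last line is the "substitute in the current parameter values" step; the open question in the thread is how to generate expressions like `elbow` automatically from the model rather than by hand.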

On Jul 11, 2015, at 11:03 AM, aespielberg notifications@github.com wrote:

Hi Russ,

Thanks a ton. I set the gradients to be numerical and not user defined and now everything works perfectly with the trajectory optimization.

I would like to be able to provide the derivatives, as I imagine that would save computational time and be more precise. Perhaps I should start a new issue for this, but I am just wondering if there is any way to get forward Kinematics in terms of the parameters? e.g. if I have parameterized lengths on an acrobot, is there a way to get the coordinates of the elbow joint and the COM of link 2 as a symbollic expression in terms of the lengths of link 1 and 2? I am wondering this because then I can add functionality (and eventually push it back to drake) that would automatically generate gradients for parameters by symbolically differentiating it and then substituting in the current parameter values.

If that's not supported, is there a file you would recommend I edit to add this functionality?

-Andy S.

On Fri, Jul 10, 2015 at 4:15 AM, Russ Tedrake notifications@github.com wrote:

You can set the grad_method property of the Constraint class. See the documentation next to the property here: https://github.com/RobotLocomotion/drake/blob/master/solvers/Constraint.m

If you provide the gradients incorrectly, then it won't work. Wrong size will probably result in a matlab "dimension mismatch" error. Wrong values will probably result in snopt returning info > 30.

snopt can tell if it's solution violates one of the constraints. so it can report infeasibility of the solution. the limitation is that it is a local optimizer, so it can sometimes return "infeasible" even if a feasible solution does exist, simply because it got stuck in a local optima.

On Jul 10, 2015, at 12:14 AM, aespielberg notifications@github.com wrote:

Hi Russ,

Thank you for the reply. I copied in the line you changed in the pull request, but I still get the same problem:

Warning: converting taylorvar to double even though it has non-zero
gradients. gradient information will be
lost!
> In TaylorVar/double (line 47)
In positionDiff (line 20)
In

constructPositionDiffConstraint>@(state)positionDiff(plant,frame_name,point_in_frame,target_point,state)
(line 3)
In FunctionHandleConstraint/constraintEval (line 44)
In Constraint>@(varargin)obj.constraintEval(varargin{:}) (line 123)
In geval (line 171)
In Constraint/eval (line 123)
In NonlinearProgram/objectiveAndNonlinearConstraints (line 581)
In NonlinearProgram/snopt/snopt_userfun (line 1314)
In snoptUserfun (line 6)
In NonlinearProgram/snopt (line 1377)
In NonlinearProgram/solve (line 973)
In DirectTrajectoryOptimization/solveTraj (line 188)
In KinematicDirtranTest/solveTraj (line 82)
In AcrobotPlantTest/swingUpTrajectoryKinematic (line 200)
In runSwingUpKinematicTest (line 6)

If I'm understanding you correctly, are you saying that hand-providing the gradients will cause it not to use TaylorVar's, and instead use just the f and df data given and leave everything else as doubles? This is something I originally tried and was doing before I started adding parameters to my model. However, the tricky thing is, as I've mentioned, I have parameterized kinematics, and the parameters are actually inputs to the constraints - and determining derivatives of the location of an object w.r.t. parameters is tricky. The autodiffing sounds very powerful, but I don't understand how these differentiations can really be generated - it sounds like magic if true, but I will read up on geval.

I see the options are something passed into geval. This leads me to two questions - how can I pass them into addConstraint (the documentation doesn't say that's something I can pass in), or is that something I pass into the solver? Secondly, when I define my constraint, I know I am supposed to make it return [f, df]. What happens if I give it a totally bogus df (potentially even the wrong dimensionality?).

Finally, on a related topic, I have my optimization running now - my z is [x for all time; u for all time; params]. I notice that in all its iterations, though, my params are never changing, and therefore the solution is deemed infeasible. Is this related to the fact that I am not providing gradient information with the constraints? Or do I need to explain to it somehow that it has new variables in its z vector that it should try to vary?

Also (sorry for so many questions), on a related note: how does the solver know there are no feasible solutions?

-Andy S.

Andrew Spielberg PhD Student MIT - Computer Science and Artificial Intelligence Laboratory

On Thu, Jul 9, 2015 at 6:00 AM, Russ Tedrake notifications@github.com wrote:

Just to be clear on how you got here: NonlinearProgram is trying to auto-differentiate your constraint function for you (hence the TaylorVar). This is happening because of the call in your stack to geval (short for gradient eval) which tries autodiff by default if you have not supplied the gradients to the requested order manually. This is a powerful feature. :)
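[Editor's note: the dispatch Russ describes can be sketched as follows. This is an illustrative MATLAB sketch, not the exact Drake API; the geval option struct follows the grad_method property mentioned later in this thread, and myConstraintFun is a hypothetical placeholder.]

```matlab
% Sketch only: how geval ("gradient eval") might be driven.
x = [0.1; 0.2];

% Default: no gradients supplied, so geval autodiffs myConstraintFun
% by promoting x to a TaylorVar (this is where the TaylorVar comes from).
[f, df] = geval(@myConstraintFun, x);

% Finite differences instead of autodiff:
options.grad_method = 'numerical';
[f, df] = geval(@myConstraintFun, x, options);

% 'user': geval trusts the [f, df] returned by the function itself.
options.grad_method = 'user';
[f, df] = geval(@myConstraintFun, x, options);
```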

You've hit a case in forwardKin where we were not properly supporting the autodiff. I just sent a pull request with a one-line fix. Unfortunately, it means that you found some code that is not covered by our autodiff unit tests, so there may be a few more one-line fixes like this needed to make it work. Try it and let me know.

Other options are to set the gradient method for the constraint to be 'numerical' or to implement the 'user' gradients yourself. You might look at the help in geval to understand better.
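[Editor's note: as a hedged illustration of the 'user' option, here is a constraint function that returns both the value and its Jacobian. The circleConstraint function and the wiring comments are hypothetical examples following the pattern in this thread, not verbatim Drake API.]

```matlab
function [f, df] = circleConstraint(x)
  % Example 'user'-gradient constraint: f = x1^2 + x2^2.
  f = x(1)^2 + x(2)^2;
  df = [2*x(1), 2*x(2)];  % Jacobian must be numel(f) x numel(x)
end

% Hypothetical wiring, mirroring the constructor call earlier in this thread:
% constraint = FunctionHandleConstraint(lb, ub, 2, @circleConstraint);
% constraint.grad_method = 'user';  % geval then skips autodiff and trusts df
```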


aespielberg commented 9 years ago

Hi Russ,

Thanks for the idea. I think I get, at a high level, what you mean; I'll have to play around with it. I'll keep thinking about this on the back burner and try something where I use the dynamics after I make a little more progress. At any rate, I think numerical derivatives are working well enough for now, so I'll mark this as resolved and come back to it soon.

-Andy S.

Andrew Spielberg PhD Student MIT - Computer Science and Artificial Intelligence Laboratory

On Sat, Jul 11, 2015 at 11:25 AM, Russ Tedrake notifications@github.com wrote:

It would definitely be a good thing to have, but it's not trivial with the current tools.

Here's one (possibly crazy) idea: the way the parameter estimation examples work is that they call the dynamics once to get back a polynomial (or trigpoly) representation of the dynamics in terms of the state AND the parameters. Once you have that, it's trivial to take the derivatives that you want. The drawback is that you also pick up some additional constraints (s^2 + c^2 = 1). But you could imagine a workflow where you do something like extractTrigPolySystem but instead do an extractParameterSystem, and the result is a new trigpoly dynamical system that takes the parameters as inputs. Or something like that.

Lots of things are possible. Finding the best solution will just take some brainstorming about system architecture.

On Jul 11, 2015, at 11:03 AM, aespielberg notifications@github.com wrote:

Hi Russ,

Thanks a ton. I set the gradients to be numerical and not user defined and now everything works perfectly with the trajectory optimization.

I would like to be able to provide the derivatives, as I imagine that would save computation time and be more precise. Perhaps I should start a new issue for this, but I am just wondering if there is any way to get forward kinematics in terms of the parameters. For example, if I have parameterized lengths on an acrobot, is there a way to get the coordinates of the elbow joint and the COM of link 2 as a symbolic expression in terms of the lengths of links 1 and 2? I am wondering this because then I could add functionality (and eventually push it back to Drake) that would automatically generate gradients for parameters by symbolically differentiating the expression and then substituting in the current parameter values.
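[Editor's note: the symbolic route Andy sketches can be prototyped outside Drake with the MATLAB Symbolic Math Toolbox. This is a standalone sketch for a two-link arm with assumed link lengths l1, l2 and joint angles th1, th2, not Drake code.]

```matlab
% Elbow and tip positions of a planar two-link arm as symbolic expressions,
% then the Jacobian w.r.t. the length parameters, evaluated numerically.
syms l1 l2 th1 th2 real
elbow = [ l1*sin(th1);
         -l1*cos(th1)];
tip   = elbow + [ l2*sin(th1 + th2);
                 -l2*cos(th1 + th2)];
dtip_dparams = jacobian(tip, [l1, l2]);            % symbolic 2x2 Jacobian
dtip_num = double(subs(dtip_dparams, ...
    {l1, l2, th1, th2}, {1.0, 2.0, pi/4, pi/6}));  % plug in current values
```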

If that's not supported, is there a file you would recommend I edit to add this functionality?

-Andy S.

On Fri, Jul 10, 2015 at 4:15 AM, Russ Tedrake notifications@github.com wrote:

You can set the grad_method property of the Constraint class. See the documentation next to the property here:

https://github.com/RobotLocomotion/drake/blob/master/solvers/Constraint.m

If you provide the gradients incorrectly, then it won't work. A wrong size will probably result in a MATLAB "dimension mismatch" error. Wrong values will probably result in snopt returning info > 30.
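[Editor's note: one way to catch a wrong df before snopt reports info > 30 is a finite-difference check. A minimal sketch; checkGradient and its tolerance are illustrative, not part of Drake.]

```matlab
% Compare a user-supplied Jacobian against central differences.
% conFun is any function returning [f, df]; x0 is a test point.
function err = checkGradient(conFun, x0)
  [~, df_user] = conFun(x0);
  h = 1e-6;
  df_num = zeros(size(df_user));
  for i = 1:numel(x0)
    dx = zeros(size(x0)); dx(i) = h;
    df_num(:, i) = (conFun(x0 + dx) - conFun(x0 - dx)) / (2*h);
  end
  err = max(abs(df_user(:) - df_num(:)));  % roughly 1e-8 when df is correct
end
```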

snopt can tell if its solution violates one of the constraints, so it can report infeasibility of the solution. The limitation is that it is a local optimizer, so it can sometimes return "infeasible" even if a feasible solution does exist, simply because it got stuck in a local optimum.
