adamhall opened 1 year ago
But for the control methods, this can be tricky since the cost function is part of both the control algorithm and the environment. The ideal case is we have a clear boundary between what's given as the environment/task (which will be used in evaluation) and what's part of the control algorithm. I'd say the cost itself (nonlinear quadratic) is still part of the task side (since we need it in evaluation anyways), but anything that uses linearization (needed in algo optimizations) can use the prior.
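To make the proposed boundary concrete, here is a minimal sketch (all names and numbers are made up, not from the codebase): the nonlinear quadratic cost sits on the task side and may use the environment's true parameters, while anything involving linearization sees only the prior model.

```python
# Hypothetical illustration of the env/algo boundary. Made-up values:
G = 9.81           # gravity, m/s^2
TRUE_MASS = 0.030  # env-side (true) mass, kg
PRIOR_MASS = 0.027 # prior-model mass available to the controller, kg

def hover_input(mass):
    """Equilibrium (hover) thrust of a point mass: u = m * g."""
    return mass * G

def task_cost(x, u, x_goal):
    """Task side: evaluation cost built around the TRUE equilibrium
    input (what env.U_EQ would expose)."""
    return (x - x_goal) ** 2 + (u - hover_input(TRUE_MASS)) ** 2

def linearization_input():
    """Algo side: linearize the prior symbolic model about its own
    equilibrium input (what symbolic.U_EQ would expose)."""
    return hover_input(PRIOR_MASS)
```

Under this split, evaluation can always score a controller with `task_cost`, while the controller itself never touches `TRUE_MASS`.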
@adamhall Do we currently have anywhere that needs to be fixed regarding this issue?
@adamhall @Justin-Yuan status?
I am leaning towards using `symbolic.U_EQ` for linearization and `env.U_EQ` for the cost function or reward. The current/updated symbolic model should already be able to expose `U_EQ`, but I'm not sure if the MPC controllers have been updated to use them as well? @adamhall
Carrying on from #93.
Many controllers have been using `env.U_GOAL`, either as a linearization point or in the cost, which uses the true system mass rather than the prior mass; the latter is what `symbolic.U_EQ` is for. Should controllers have access to `U_GOAL`, or should they exclusively be using `U_EQ`? @Justin-Yuan What are your thoughts on this for the RL cost and normalization?
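A small numerical sketch of why this matters (illustrative point-mass model with made-up masses, not the actual env): the true system's equilibrium input (`env.U_GOAL`) is not an equilibrium of the prior dynamics, so linearizing the prior model there introduces a constant drift term.

```python
G = 9.81
TRUE_MASS = 0.030   # true mass used by the environment
PRIOR_MASS = 0.027  # mass assumed by the prior symbolic model

def prior_accel(u):
    """Vertical acceleration predicted by the prior point-mass model."""
    return u / PRIOR_MASS - G

u_goal = TRUE_MASS * G   # like env.U_GOAL: equilibrium of the TRUE system
u_eq = PRIOR_MASS * G    # like symbolic.U_EQ: equilibrium of the PRIOR model

print(prior_accel(u_eq))    # ~0: a genuine equilibrium of the prior model
print(prior_accel(u_goal))  # nonzero residual acceleration (drift)
```

So a controller that linearizes about `env.U_GOAL` both operates off-equilibrium in its own model and implicitly leaks the true mass into the algorithm side.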