Closed casparvv closed 2 years ago
But q0 and q0dot are never overwritten during a trial, so self.reset(q0,q0dot) should be sufficient, shouldn't it?
True; however, it seems that the initial action is different if we only reset with self.reset(q0,q0dot). I think this small difference in the initial action leads to a different result.
This might only be a problem for the sensor fabric application because the planner is not reset properly; in that case, rerunning q0, q0dot = self._experiment.initState() for every planner solves the problem.
I will test it with a more general case to see what happens.
Make sure that q0 and q0dot are never overwritten during the running of a trial.
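One way to guard against this, sketched below with hypothetical names (the real reset call lives in the benchmark code), is to pass defensive copies of the stored initial state into the reset call. If the environment keeps references to the arrays and writes into them in place during the trial, the caller's q0 and q0dot would otherwise be silently overwritten.

```python
import numpy as np

# Initial state sampled once at the start of a trial.
q0 = np.array([0.5, -0.3])
q0dot = np.zeros(2)

def reset(q, qdot):
    # Hypothetical stand-in for self.reset(q0, q0dot). It keeps references
    # to the arrays and later writes into them in place, as a simulated
    # trial side effect.
    state = {"q": q, "qdot": qdot}
    state["q"][:] += 0.1  # in-place write during the trial
    return state

# Passing copies protects the stored initial state for the next rerun.
reset(q0.copy(), q0dot.copy())
assert q0[0] == 0.5 and q0[1] == -0.3  # original state untouched
```

With plain `reset(q0, q0dot)` the assertion above would fail, which is exactly the kind of silent overwrite to check for.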
I tested it with the point robot example, and it does not seem to be a problem there, so I will close this issue.
Will check my code for overwriting of q0 and q0dot.
When rerunning an experiment from a study with multiple planners, the result is not always exactly the same; it seems that the initial state is not correctly reset. I ran a study with multiple fabric planners: rerunning the planner that is executed first gives exactly the same results, but rerunning the second or one of the following planners gives slightly different results.
This might be due to line 172 below, which is declared before looping over the planners. https://github.com/maxspahn/localPlannerBench/blob/a728b3b4ab93e04ea2786f312575aaa555dbe2fb/plannerbenchmark/exec/runner.py#L172-L177
Not sure, but it seems that moving line 172 to between lines 176 and 177 solves the issue.
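A minimal sketch of the suspected pattern, using hypothetical names (the actual code is in runner.py around the referenced lines): if initState() is called once before looping over the planners, a trial that mutates the state in place leaks it into the next planner's run. Moving the call inside the loop gives every planner a fresh initial state.

```python
import numpy as np

class Experiment:
    # Hypothetical stand-in for the benchmark experiment.
    def initState(self):
        return np.array([1.0, 2.0]), np.zeros(2)

def run_planner(name, q0, q0dot):
    q0 += 0.1  # a trial that (incorrectly) mutates the initial state in place
    return name, q0.copy()

experiment = Experiment()

# Buggy: initState() called once, before the loop over planners.
q0, q0dot = experiment.initState()
buggy = [run_planner(p, q0, q0dot) for p in ["planner_a", "planner_b"]]

# Fixed: initState() called inside the loop, once per planner.
fixed = []
for p in ["planner_a", "planner_b"]:
    q0, q0dot = experiment.initState()
    fixed.append(run_planner(p, q0, q0dot))

# In the buggy version the second planner starts from a mutated state.
assert not np.allclose(buggy[0][1], buggy[1][1])
assert np.allclose(fixed[0][1], fixed[1][1])
```

This matches the observation above that only the second and later planners give slightly different results on a rerun: the first planner always sees the freshly sampled state.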