hai-h-nguyen opened this issue 5 years ago
I'm a little unclear about the question. Are you trying one of our examples? If not, is that a simulated task?
For all our real-world robot tasks, we do inherit `gym.core.Env`. For example, with the UR5 arm, `ReacherEnv` inherits the gym core env (link).

As for registering the env, it's needed only when you'd like to use `env = gym.make("custom_env_name")`. We did that with our `DoubleInvertedPendulumEnv` (link).
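For reference, registration looks roughly like this (a minimal sketch; the module path, class name, and constructor kwargs below are hypothetical placeholders, not anything from SenseAct):

```python
import gym
from gym.envs.registration import register

# Register the custom env under an id so that gym.make() can build it.
# "my_package.aubo_reacher:AuboReacherEnv" is a hypothetical entry point.
register(
    id="AuboReacher-v0",
    entry_point="my_package.aubo_reacher:AuboReacherEnv",
    max_episode_steps=150,        # episode length for the time-limit wrapper
    kwargs={"dt": 0.04},          # constructor arguments forwarded to the env
)

# Only needed if your algorithm creates the env by name:
env = gym.make("AuboReacher-v0")
```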
I'm assuming that you're trying to use the baselines implementation of DDPG. Let me know if you have any other questions.
I have a different robot, but I modified the code so that it works. However, I want to try a different algorithm (DDPG + HER), as it should be faster than TRPO. The HER code creates the env with Gym's make function, so I think I can follow your suggestion.
Another question: my code has a problem after running for a few hours. The `_sensor_handler` and `_actuator_handler` threads stop running after a while (even though they had been running fine for an hour or so). What might be the possible reasons for that?
This is a typical error:
WARNING:root:Agent has over-run its allocated dt, it has been 0.28047633171081543 since the last observation, 0.24047633171081542 more than allowed
Resetting
Reset done
Resetting
Reset done
Resetting
Reset done
Resetting
Reset done
Resetting
It just keeps looping between these. Since commands are no longer sent to the robot (the `_actuator_handler` thread has stopped), the robot does not move at all. I also verified that the `_sensor_handler` stops running as well.
Is it possible for you to share some code snippets or elaborate on what you are trying to do? I have seen such errors when Python multiprocessing code was set up incorrectly.
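One generic way to narrow down when the handlers die (a sketch that uses only the Python standard library, not the SenseAct API; the watchdog name is arbitrary):

```python
import threading
import time

def log_alive_threads(period=10.0):
    """Periodically print which threads are still alive, so you can see
    exactly when the sensor/actuator handler threads disappear."""
    while True:
        names = sorted(t.name for t in threading.enumerate())
        print("alive threads:", ", ".join(names))
        time.sleep(period)

# Start the watchdog as a daemon thread alongside the training loop.
threading.Thread(target=log_alive_threads, name="watchdog", daemon=True).start()
```

Wrapping the handler bodies in a try/except that prints the traceback would also show whether they are dying from an unhandled exception.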
Thanks! Please look at the code at https://github.com/hhn1n15/SenseAct_Aubo. Basically, right now I am trying to replicate your results (using TRPO) with a new robot (the Aubo robot). I added a new device, `aubo`, and created an `aubo_reacher` (based on `ur_reacher`). Most of the code stays the same.
The dt may overrun if expensive learning updates are done sequentially, among many other reasons. It is not that bothersome if it happens, say, once every few minutes. However, if it happens more often, two options are to compute the update more efficiently (e.g., on a more powerful computer) or to run the learning updates asynchronously in a different process.
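The asynchronous option could look roughly like this (a generic multiprocessing sketch, not the SenseAct API; all names and the placeholder transition are illustrative):

```python
import multiprocessing as mp

def learner(transition_queue):
    """Consume transitions and run the (potentially slow) learning updates
    without blocking the real-time interaction loop."""
    buffer = []
    while True:
        item = transition_queue.get()
        if item is None:              # sentinel: shut down
            break
        buffer.append(item)
        # ... compute a gradient update from `buffer` here ...

if __name__ == "__main__":
    queue = mp.Queue(maxsize=1000)
    proc = mp.Process(target=learner, args=(queue,), daemon=True)
    proc.start()

    # The interaction loop only steps the env and enqueues data; it never
    # waits for a learning update, so dt over-runs become less likely.
    for step in range(100):
        transition = {"obs": None, "action": None, "reward": 0.0}  # placeholder
        queue.put(transition)

    queue.put(None)                   # tell the learner to stop
    proc.join()
```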
Are the handlers stopping even when you are running TRPO or PPO?
I suggest getting it to learn with TRPO or PPO using the example script first, before moving to HER. Getting effective learning with a new robot is no trivial job, and I would be glad to see this working!
I haven't tried DDPG + HER yet. The two handlers stop even with the original code using TRPO. Actually, the communicator stops, which makes the two threads stop.
I want to replace TRPO with DDPG + HER and am having difficulties. The combination only works with a task that is registered with Gym. How did TRPO avoid that?
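My current guess (a rough, purely illustrative sketch) is that the TRPO example script builds the env instance itself and passes that object straight to the learning code, while the HER code only takes an env id and calls gym.make internally; something like the helper below would accept either form:

```python
import gym

def resolve_env(env_or_id):
    """Return an env instance, whether given an instance or a registered id."""
    if isinstance(env_or_id, gym.Env):
        return env_or_id            # TRPO-style: env constructed in the script
    return gym.make(env_or_id)      # HER-style: env created from a registered id
```

Is that roughly right?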