Closed cambel closed 4 years ago
It looks like your environment can't be pickled. I recommend removing the env from the snapshot that is being saved.
See rlkit/core/rl_algorithm.py, line 56
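One way to follow that advice is to filter un-picklable entries out of the snapshot dict before it is handed to the logger. This is a minimal sketch, not rlkit's code; the helper name is hypothetical:

```python
import pickle

def strip_unpicklable(snapshot):
    """Return a copy of `snapshot` without entries that fail to pickle."""
    safe = {}
    for key, value in snapshot.items():
        try:
            pickle.dumps(value)
        except (pickle.PicklingError, TypeError, AttributeError):
            continue  # drop e.g. a ROS-backed env that holds live sockets
        safe[key] = value
    return safe
```

You would apply it to the snapshot right before it is saved, e.g. `logger.save_itr_params(epoch, strip_unpicklable(snapshot))`.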
@vitchyr Thanks for your reply.
I removed the env from the snapshot as you suggested, but then I got a different error:
ephoc... 0
2019-09-27 14:23:38.024797 JST | [name-of-experiment_2019_09_27_14_22_13_0000--s-0] Epoch 0 finished
[{}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {'TimeLimit.truncated': True}]
Traceback (most recent call last):
  File "/home/cambel/ws_ur5/src/ur3/ur3e_openai/scripts/sac.py", line 129, in <module>
    experiment(variant)
  File "/home/cambel/ws_ur5/src/ur3/ur3e_openai/scripts/sac.py", line 96, in experiment
    algorithm.train()
  File "/home/cambel/dev/rlkit/rlkit/core/rl_algorithm.py", line 46, in train
    self._train()
  File "/home/cambel/dev/rlkit/rlkit/core/batch_rl_algorithm.py", line 84, in _train
    self._end_epoch(epoch)
  File "/home/cambel/dev/rlkit/rlkit/core/rl_algorithm.py", line 62, in _end_epoch
    self._log_stats(epoch)
  File "/home/cambel/dev/rlkit/rlkit/core/rl_algorithm.py", line 114, in _log_stats
    eval_util.get_generic_path_information(expl_paths),
  File "/home/cambel/dev/rlkit/rlkit/core/eval_util.py", line 40, in get_generic_path_information
    for p in paths
  File "/home/cambel/dev/rlkit/rlkit/core/eval_util.py", line 40, in <listcomp>
    for p in paths
  File "/home/cambel/dev/rlkit/rlkit/pythonplusplus.py", line 166, in list_of_dicts__to__dict_of_lists
    assert set(d.keys()) == set(keys)
AssertionError
The epoch is empty, do you know why that might be?
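For reference, the assertion comes from rlkit's helper that flattens each path's env_infos and requires every dict in the list to have the same keys. A simplified sketch (named after the rlkit helper, but not its exact code) shows why mixed keys trip it:

```python
def list_of_dicts_to_dict_of_lists(lst):
    """Flatten a list of dicts; every dict must share the same key set."""
    if len(lst) == 0:
        return {}
    keys = lst[0].keys()
    for d in lst:
        assert set(d.keys()) == set(keys)  # fails if any dict differs
    return {k: [d[k] for d in lst] for k in keys}

# gym's TimeLimit wrapper adds 'TimeLimit.truncated' only on the final step,
# so the env_infos list mixes empty dicts with one non-empty dict:
env_infos = [{}] * 99 + [{"TimeLimit.truncated": True}]
```

One workaround is to have the environment emit the same info keys on every step (e.g. `TimeLimit.truncated: False` by default), or to strip the key before logging.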
You need to make sure the environment adheres to the MultitaskEnv interface. See https://github.com/vitchyr/multiworld/
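A minimal sketch of the shape that interface expects, using multiworld's MultitaskEnv method names (`get_goal`, `sample_goals`, `compute_rewards`); the 3-D goal space and distance reward here are hypothetical:

```python
import numpy as np

class MyRobotEnv:
    """Sketch of a MultitaskEnv-style goal API (details hypothetical)."""

    def __init__(self):
        self._goal = np.zeros(3)

    def get_goal(self):
        # Current goal, keyed the way the algorithm will look it up.
        return {"desired_goal": self._goal}

    def sample_goals(self, batch_size):
        # Batched goals, one row per sample.
        goals = np.random.uniform(-1.0, 1.0, size=(batch_size, 3))
        return {"desired_goal": goals}

    def compute_rewards(self, actions, obs):
        # Vectorized reward: negative distance from achieved to desired goal.
        return -np.linalg.norm(
            obs["achieved_goal"] - obs["desired_goal"], axis=-1
        )
```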
I created a Gym environment for a robot in ROS-Gazebo. Now, I want to train it using SAC.
I am trying to implement the SAC agent based on the example but I get the following error
I'm working on Ubuntu 16.04. I was using PyTorch v0.4.1, and I also tried the latest version of PyTorch, but got the same result.
I am not using images; the state of the robot is just an array of the position of its end-effector.
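For context, a numpy-only sketch of that kind of state-based env, duck-typing the gym reset/step API (the dynamics and reward here are hypothetical placeholders, not the actual ROS-Gazebo env):

```python
import numpy as np

class EndEffectorEnv:
    """Gym-style env whose observation is a 3-D end-effector position."""

    def __init__(self):
        self._pos = np.zeros(3, dtype=np.float32)

    def reset(self):
        self._pos = np.zeros(3, dtype=np.float32)
        return self._pos.copy()

    def step(self, action):
        # Integrate the commanded displacement and clip to workspace bounds.
        self._pos = np.clip(
            self._pos + 0.05 * np.asarray(action), -1.0, 1.0
        ).astype(np.float32)
        reward = -float(np.linalg.norm(self._pos))  # distance penalty
        return self._pos.copy(), reward, False, {}
```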
Has anyone had this same problem?