stepjam / RLBench

A large-scale benchmark and learning environment.
https://sites.google.com/corp/view/rlbench

RLBenchEnv related issues #116

Closed: ShangqunYu closed this issue 3 years ago

ShangqunYu commented 3 years ago
  1. Is there a way to create multiple environments? Currently, if I try to make more than one environment, I get the following error:

    import gym
    import rlbench.gym  # registers the RLBench tasks with gym

    env = gym.make('reach_target-state-v0')
    env2 = gym.make('reach_target-state-v0')

    Attribute Qt::AA_UseDesktopOpenGL must be set before QCoreApplication is created.
    WARNING: QApplication was not created in the main() thread.

    (python3:49361): GLib-CRITICAL **: 17:55:14.764: g_main_context_push_thread_default: assertion 'acquired_context' failed
  2. What is the proper way to change the action mode when using RLBenchEnv? Right now there seems to be no easy way to do it. I could modify the RLBench code itself, but I would really like to avoid changing the original code. Currently I use the code below to change the action mode, but I wonder whether it may cause errors: during initialization the environment is set up under the default action mode, so some settings may differ, such as self._robot.arm.set_control_loop_enabled.

    env = gym.make('reach_target-state-v0')
    # overwrite the private attribute after the env has been constructed
    env.env._action_mode = ArmActionMode.EE_POSE_PLAN_EE_FRAME

    Thanks! :)

stepjam commented 3 years ago

Hi. 1) Each RLBench env needs to be created in a new process. See here: https://github.com/stepjam/PyRep#running-multiple-pyrep-instances
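
For concreteness, here is a minimal sketch of that pattern using the gym-registered task from above. It assumes the rlbench.gym registration module and the old 4-tuple gym step API; each environment is created, stepped, and closed entirely inside its own process.

    from multiprocessing import Process

    import gym
    import rlbench.gym  # registers the RLBench tasks with gym

    def run_episode(worker_id):
        # each process owns its own env (and its own CoppeliaSim instance)
        env = gym.make('reach_target-state-v0')
        obs = env.reset()
        for _ in range(10):
            obs, reward, done, info = env.step(env.action_space.sample())
        env.close()

    if __name__ == '__main__':
        processes = [Process(target=run_episode, args=(i,)) for i in range(2)]
        for p in processes:
            p.start()
        for p in processes:
            p.join()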

2) You select the action mode when defining the environment. See the examples, e.g. https://github.com/stepjam/RLBench/blob/master/examples/single_task_rl.py#L23
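
For reference, the linked example boils down to roughly the following (RLBench API as of the time of this thread; ReachTarget is just an illustrative task):

    from rlbench.action_modes import ActionMode, ArmActionMode
    from rlbench.environment import Environment
    from rlbench.observation_config import ObservationConfig
    from rlbench.tasks import ReachTarget

    # the action mode is fixed here, at environment construction time
    action_mode = ActionMode(ArmActionMode.ABS_JOINT_VELOCITY)
    env = Environment(action_mode, obs_config=ObservationConfig(), headless=True)
    env.launch()
    task = env.get_task(ReachTarget)

    descriptions, obs = task.reset()
    obs, reward, terminate = task.step([0.0] * env.action_size)
    env.shutdown()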

ShangqunYu commented 3 years ago

Thanks for getting back to me, Stephen :) Regarding question 2, I have seen the example, but as I mentioned, I would like to use RLBenchEnv, which inherits from gym.Env, rather than Environment, because we have been trying to use existing open-source implementations such as stable-baselines as our baselines. Should I just make an env with gym.make, create a new Environment with the desired action mode, and replace the env's original Environment with the new one I created? Thanks :)
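
(For anyone reading later, a sketch of the swap described above. It assumes the old gym wrapper in rlbench/gym/rlbench_env.py stores its Environment as self.env and its task as self.task; those attribute names are a reading of that file, not a documented API, and may differ between versions.)

    import gym
    import rlbench.gym
    from rlbench.action_modes import ActionMode, ArmActionMode
    from rlbench.environment import Environment
    from rlbench.observation_config import ObservationConfig
    from rlbench.tasks import ReachTarget

    gym_env = gym.make('reach_target-state-v0')
    rlbench_env = gym_env.env  # unwrap to the underlying RLBenchEnv

    # shut down the Environment built under the default action mode...
    rlbench_env.env.shutdown()

    # ...and replace it with one built under the desired action mode
    # (the obs_config should match the wrapper's observation mode)
    action_mode = ActionMode(ArmActionMode.EE_POSE_PLAN_EE_FRAME)
    new_env = Environment(action_mode, obs_config=ObservationConfig(), headless=True)
    new_env.launch()
    rlbench_env.env = new_env
    rlbench_env.task = new_env.get_task(ReachTarget)

One caveat with this approach: the wrapper computed its gym action_space under the default action mode, so after the swap it may no longer match the new action mode's action size.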

stepjam commented 3 years ago

Sorry for the delay. I assume this has been resolved. Feel free to reopen if needed!

xf-zhao commented 2 years ago

Hi, have you found a way to use a gym-like env with a specified action mode? This issue has been marked as closed, but I did not see your question get answered.

wyd0817 commented 1 year ago

> Thanks for getting back to me, Stephen :) Regarding question 2, I have seen the example, but as I mentioned, I would like to use RLBenchEnv, which inherits from gym.Env, rather than Environment, because we have been trying to use existing open-source implementations such as stable-baselines as our baselines. Should I just make an env with gym.make, create a new Environment with the desired action mode, and replace the env's original Environment with the new one I created? Thanks :)

We also encountered the same issue. How did you solve it?

qiwang067 commented 1 year ago

Hi Stephen, I tried to make two gym environments and looked at the example you provided, but it is a bit difficult for me to implement. Could you show me more details about making two gym environments?
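
(A sketch of one way to drive two gym envs from a parent process, with each env living in its own worker process as suggested earlier in the thread. The pattern mirrors what SubprocVecEnv in stable-baselines does; the worker protocol here is illustrative, not RLBench API.)

    from multiprocessing import Pipe, Process

    def worker(conn, env_name):
        # import inside the process that will own the env
        import gym
        import rlbench.gym
        env = gym.make(env_name)
        env.reset()
        while True:
            cmd = conn.recv()
            if cmd == 'step':
                # sample inside the worker so the parent does not need
                # to know the action space
                obs, reward, done, info = env.step(env.action_space.sample())
                conn.send(reward)
            elif cmd == 'close':
                env.close()
                conn.close()
                break

    if __name__ == '__main__':
        pipes = [Pipe() for _ in range(2)]
        workers = [Process(target=worker, args=(child, 'reach_target-state-v0'))
                   for _, child in pipes]
        for w in workers:
            w.start()
        for parent, _ in pipes:
            parent.send('step')
        print([parent.recv() for parent, _ in pipes])
        for parent, _ in pipes:
            parent.send('close')
        for w in workers:
            w.join()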