Closed: sungsulim closed this issue 6 years ago
Hi,
Can you build a simple, shareable reproduction case for the problem? It seems that it does not load the ground plane properly.
On Thu, Apr 19, 2018, Sungsu Lim wrote:
![error screenshot](https://user-images.githubusercontent.com/11016779/38969600-516c4e50-434e-11e8-9bfd-fbbcf88a68d0.png)
When calling test_env.reset(), it throws an error which I'm not sure how to parse. I'm using HalfCheetahBulletEnv-v0.
I have two different instances of the environment, one for training and one for evaluating. Every N timesteps of training, I evaluate by running episodes on the test environment. It does not throw an error when calling train_env.reset() at the beginning.
What could the issue be?
```python
import pybullet as p
import pybullet_envs
import gym

train_env = gym.make('HalfCheetahBulletEnv-v0')
test_env = gym.make('HalfCheetahBulletEnv-v0')

train_env.reset()
test_env.reset()
```
This gives the error:
There will be a conflict, since both environments use the same global pybullet module handle and populate it twice.
We could solve this the same way as in the Minitaur Gym environments, by using a local module handle called BulletClient. See bullet_client.py and the v1 Minitaur environment using bullet_client.py. Then instead of the global p.loadURDF, you use self._bullet_client.loadURDF: all methods are passed through, and the BulletClient adds the right physicsClientId.
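The pass-through idea can be sketched in plain Python. This is not the real bullet_client.py (that lives in the pybullet source tree); FakePybullet and Client below are hypothetical stand-ins that only illustrate how __getattr__ can forward every call to a shared backend while injecting a per-environment physicsClientId:

```python
import functools

class FakePybullet:
    """Stand-in for the pybullet module: calls accept a physicsClientId."""
    def loadURDF(self, fileName, physicsClientId=0):
        return "loaded %s on client %d" % (fileName, physicsClientId)

class Client:
    """Forwards attribute lookups to the backend, injecting the client id."""
    def __init__(self, backend, client_id):
        self._backend = backend
        self._client_id = client_id

    def __getattr__(self, name):
        # Called only for attributes not found on Client itself, so every
        # "pybullet" method falls through to the shared backend.
        attr = getattr(self._backend, name)
        if callable(attr):
            return functools.partial(attr, physicsClientId=self._client_id)
        return attr

pb = FakePybullet()
train_client = Client(pb, 0)
test_client = Client(pb, 1)
print(train_client.loadURDF("plane.urdf"))  # routed to client 0
print(test_client.loadURDF("plane.urdf"))   # routed to client 1
```

Each environment would hold its own Client instance instead of importing pybullet directly, so two environments in one process no longer stomp on each other's state.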
I found an easy workaround: if we start each env in a new process, this collision no longer exists. And I guess this is the reason why OpenAI Roboschool doesn't use the official bullet3; they use a forked branch named roboschool_self_collision instead.
There is no need for Roboschool. The environment needs to be modified to use bullet_client.py, and then all should work well. In a nutshell: remove all import pybullet statements, and in one central place import bullet_client, create an instance (connected to GUI, SHARED_MEMORY or DIRECT) and pass that instance around as a replacement for 'p'. Again, this is done in our Minitaur environment. Someone has to do this refactoring.
It is not the reason why Roboschool uses a fork. They hack some constants into the physics engine; that is why Roboschool will become outdated pretty quickly. Check out benelot/pybullet-gym for the most up-to-date gyms. I made a separate repository to free Erwin from the burden of maintaining the envs as development goes along.
@benelot Thanks for sharing this, it looks very helpful. I will check it out.
Fixed here: https://github.com/bulletphysics/bullet3/pull/1690 Now you can create multiple of those environments in the same process (on the same thread or on other threads). The Gym environments that ship with PyBullet from this repository, including Ant, Hopper, Humanoid, HumanoidFlagrunHarder, InvertedPendulum etc., should be stable and considered the reference version.
So you again intend to keep all envs up to date and maintained within the bullet repository? I am again working on making the envs more similar to MuJoCo. My results will all be in pybullet-gym and depend on pybullet. I would suggest keeping it that way, to make them less coupled and reduce your maintenance work on them. What do you think, Erwin?
Your pybullet-gym seems a bit more ambitious, with a growing number of environments, Keras support etc. I can't keep up with those developments easily and need to grow our own environments (Minitaur, KUKA, MIT racecar etc.) that I'm more familiar with. Still, those MuJoCo locomotion and pendulum envs (Ant, Humanoid, Hopper etc.) are pretty important; I became familiar with your initial port of Roboschool that is now in PyBullet, and I can manage and maintain those as part of PyBullet. I think Roboschool is a dead project. Making those envs fully compatible (as in: weights are exchangeable with the MuJoCo versions) would be nice, but then the version needs to bump up.