gtatiya opened this issue 6 years ago
Ah ha! This is a good find! We don't often test these environments on Windows, and I think you've found a bug in how we load the environment.
For now, can you verify that the Humanoid-v2 environment works for you?
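Something along these lines should be enough to check (a minimal sketch, assuming gym and mujoco-py are installed and your MuJoCo license is activated):

```python
import gym

# Plain MuJoCo environment -- no robotics assets involved.
env = gym.make('Humanoid-v2')
env.reset()
env.render()
env.close()
```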
Yes, Humanoid-v2 works! But I got the same error for HandReach-v0.
If you check out gym and apply this patch manually, does it work? https://github.com/openai/gym/pull/1220
It will probably get merged after some review, but it would be good to test it now.
I manually applied the patch for fetch and hand, but got the same error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-616e18e9e820> in <module>
----> 1 env = gym.make('FetchReach-v1')
2 print(env.reset())
3 env.render()
c:\users\gyant\downloads\compressed\gym-master\gym\envs\registration.py in make(id)
165
166 def make(id):
--> 167 return registry.make(id)
168
169 def spec(id):
c:\users\gyant\downloads\compressed\gym-master\gym\envs\registration.py in make(self, id)
117 logger.info('Making new env: %s', id)
118 spec = self.spec(id)
--> 119 env = spec.make()
120 # We used to have people override _reset/_step rather than
121 # reset/step. Set _gym_disable_underscore_compat = True on
c:\users\gyant\downloads\compressed\gym-master\gym\envs\registration.py in make(self)
84 else:
85 cls = load(self._entry_point)
---> 86 env = cls(**self._kwargs)
87
88 # Make the enviroment aware of which spec it came from.
c:\users\gyant\downloads\compressed\gym-master\gym\envs\robotics\fetch\reach.py in __init__(self, reward_type)
19 gripper_extra_height=0.2, target_in_the_air=True, target_offset=0.0,
20 obj_range=0.15, target_range=0.15, distance_threshold=0.05,
---> 21 initial_qpos=initial_qpos, reward_type=reward_type)
22 utils.EzPickle.__init__(self)
c:\users\gyant\downloads\compressed\gym-master\gym\envs\robotics\fetch_env.py in __init__(self, model_path, n_substeps, gripper_extra_height, block_gripper, has_object, target_in_the_air, target_offset, obj_range, target_range, distance_threshold, initial_qpos, reward_type)
46 super(FetchEnv, self).__init__(
47 model_path=model_path, n_substeps=n_substeps, n_actions=4,
---> 48 initial_qpos=initial_qpos)
49
50 # GoalEnv methods
c:\users\gyant\downloads\compressed\gym-master\gym\envs\robotics\robot_env.py in __init__(self, model_path, initial_qpos, n_actions, n_substeps)
32
33 self.seed()
---> 34 self._env_setup(initial_qpos=initial_qpos)
35 self.initial_state = copy.deepcopy(self.sim.get_state())
36
c:\users\gyant\downloads\compressed\gym-master\gym\envs\robotics\fetch_env.py in _env_setup(self, initial_qpos)
170 def _env_setup(self, initial_qpos):
171 for name, value in initial_qpos.items():
--> 172 self.sim.data.set_joint_qpos(name, value)
173 utils.reset_mocap_welds(self.sim)
174 self.sim.forward()
c:\anaconda2\envs\py3_6\lib\site-packages\mujoco_py-1.50.1.68-py3.6.egg\mujoco_py\generated\wrappers.pxi in mujoco_py.cymj.PyMjData.set_joint_qpos()
TypeError: 'numpy.int32' object is not iterable
Can you verify that you can load the XML in mujoco (without python) using the simulate binary?
Yes, I can run simulate ../model/humanoid.xml
Could you please help me.
@gtatiya can you manually load the XMLs from the fetch and hand environments in mujoco-py?
It would help me if you could provide more context on this error here:
c:\users\gyant\downloads\compressed\gym-master\gym\envs\robotics\fetch_env.py in _env_setup(self, initial_qpos)
170 def _env_setup(self, initial_qpos):
171 for name, value in initial_qpos.items():
--> 172 self.sim.data.set_joint_qpos(name, value)
173 utils.reset_mocap_welds(self.sim)
174 self.sim.forward()
c:\anaconda2\envs\py3_6\lib\site-packages\mujoco_py-1.50.1.68-py3.6.egg\mujoco_py\generated\wrappers.pxi in mujoco_py.cymj.PyMjData.set_joint_qpos()
TypeError: 'numpy.int32' object is not iterable
In particular, could you provide what name and value are when that line fails, and provide the model (sim.model)? Thanks
I just started playing with these environments today and ran into the same errors; hope this helps.
My error message said something like this:
File "mujoco_py\generated/wrappers.pxi", line 2547, in mujoco_py.cymj.PyMjData.set_joint_qpos TypeError: 'numpy.int32' object is not iterable
So I checked wrappers.pxi; line 2547 is start_i, end_i = addr.
addr comes from line 2535: addr = self._model.get_joint_qpos_addr(name).
The error happens because addr is a numpy.int32, not a tuple: when ndim == 1, get_joint_qpos_addr returns addr instead of (addr, addr + ndim), so the unpacking on line 2547 fails.
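In other words, the failing sequence boils down to something like this (a standalone sketch of the logic in wrappers.pxi, not the actual generated code):

```python
import numpy as np

# What get_joint_qpos_addr returns for a 1-dim (hinge/slide) joint:
addr = np.int32(7)  # a scalar address, not a (start, end) tuple

# What set_joint_qpos then does with it on line 2547:
try:
    start_i, end_i = addr
except TypeError as e:
    # Under Python 3.6 this is the same message as in the traceback above:
    # 'numpy.int32' object is not iterable
    print(e)
```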
Yes, simulate reach.xml works.
Here's the output you asked for:
name: robot0:slide0
value: 0.4049
self.sim.model: <mujoco_py.cymj.PyMjModel object at 0x000001A7145DF908>
I've run into the same problem...
Could you please help me.
@gtatiya I've not been able to reproduce anything like this, so I still am confused as to the source of the error.
Do you know what the numpy.int32 object in question is?
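One quick way to check, bypassing gym entirely (the XML path below is just where it lives in the gym checkout; adjust as needed):

```python
import mujoco_py

# Load the Fetch reach model directly and query the joint address that
# set_joint_qpos will later try to unpack.
model = mujoco_py.load_model_from_path('gym/envs/robotics/assets/fetch/reach.xml')
addr = model.get_joint_qpos_addr('robot0:slide0')
print(type(addr), addr)  # reportedly <class 'numpy.int32'> on Windows for this 1-dim slide joint
```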
@machinaut In wrappers.pxi, lines 1050-1052, the docstring for get_joint_qpos_addr says:
Returns:
- address (int, tuple): returns int address if 1-dim joint, otherwise
  returns the a (start, end) tuple for pos[start:end] access.
I think addr is the numpy.int32 object that causes the error.
I ran into the same issue and experimented with wrappers.pxi.
In set_joint_qpos(self, name, value) (around line 2542),
print(type(addr), isinstance(addr, (int, np.int32, np.int64)))
prints <class 'numpy.int32'> False.
Replacing all instances of isinstance(addr, (int, np.int32, np.int64)) with hasattr(addr, '__int__') seems to fix the issue. I am, however, not sure how wrappers.pxi is generated or what the proper change to fix the issue would be.
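For reference, the idea behind that replacement check, sketched outside of wrappers.pxi (the addr values here are made up for illustration):

```python
import numpy as np

# get_joint_qpos_addr returns a scalar address for 1-dim joints and a
# (start, end) tuple for free/ball joints.
scalar_addr = np.int32(7)
range_addr = (7, 14)

# hasattr(addr, '__int__') accepts anything int-like, regardless of which
# exact integer type the compiled wrapper produces on Windows:
print(hasattr(scalar_addr, '__int__'))  # True  -> treat as a single qpos index
print(hasattr(range_addr, '__int__'))   # False -> unpack as (start, end)
```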
@VengeurK There's a script (probably easy to find) which generates this: https://github.com/openai/mujoco-py/blob/master/scripts/gen_wrappers.py
I can confirm @VengeurK's fix works for me too, I'm preparing a PR that fixes the windows build.
I still encounter the same bug. Any update now?
@Altriaex Are you able to try my branch referenced in the pull request above? OpenAI hasn't responded to the PR, but it may fix this issue for you.
@aaronsnoswell It turns out that I need to recompile after editing the files, as you mentioned in https://github.com/openai/mujoco-py/compare/master...aaronsnoswell:fix-windows-support
Thank you!
I ran into the same problem using mujoco_py on Windows, but the lines I needed to change were around 1054, and there were about four places to replace. My mujoco_py version is 1.50.1.0.
Describe the bug
I'm able to use other environments such as 'CartPole-v1', but I can't render any of the Fetch robotics environments.
Please Help !!!