jr-robotics / robo-gym

An open source toolkit for Distributed Deep Reinforcement Learning on real and simulated robots.
https://sites.google.com/view/robo-gym
MIT License

Example of `NoObstacleNavigationMir100Sim-v0` isn't working #51

Closed Rowing0914 closed 2 years ago

Rowing0914 commented 2 years ago

Hi,

Thank you for your effort to make the community better by providing this support for DRL for robotics in industry! I've got a weird error that I can't fix myself...

# From L 115 of mir100.py
robo_gym.utils.exceptions.InvalidStateError: The environment state received is not contained in the observation space.

The actual state value is [ 2.26315055e+00 4.92080649e-01 -3.79569101e-05 -1.70988005e-05]. FYI, if we comment out those lines, it works!
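
For reference, the InvalidStateError above is raised when the received state fails an elementwise bounds check against the env's Box observation space. A minimal sketch of that check, using the reported state and purely hypothetical bounds (the real ones are built in mir100.py's observation-space definition):

```python
import numpy as np

# The state reported in the traceback above:
state = np.array([2.26315055e+00, 4.92080649e-01, -3.79569101e-05, -1.70988005e-05])

# Hypothetical bounds for illustration only -- not the actual mir100.py values.
low = np.full(4, -2.0)
high = np.full(4, 2.0)

# gym's Box.contains() is essentially this elementwise bounds check:
inside = bool(np.all(state >= low) and np.all(state <= high))
print(inside)  # False here: the first component (~2.263) exceeds the upper bound
```

If the first component of the state can legitimately exceed the configured bound, the observation space definition is too tight, which matches the "comment out those lines" workaround.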

Rowing0914 commented 2 years ago

My test code.

import os
import gym
import robo_gym
from robo_gym.wrappers.exception_handling import ExceptionHandling

target_machine_ip = '127.0.0.1'  # or other machine 'xxx.xxx.xxx.xxx'

# initialize environment
env = gym.make('NoObstacleNavigationMir100Sim-v0', ip=target_machine_ip, gui=False)
env = ExceptionHandling(env)

num_episodes = 1

for episode in range(num_episodes):
    done = False
    env.reset()
    for i in range(10):
        # random step in the environment
        action = env.action_space.sample()
        state, reward, done, info = env.step(action)
        print(i, state, action, reward, done)
Rowing0914 commented 2 years ago

Just in case, this is the output of the prints in my test code.

0 [ 1.34030803e+00 -1.20318812e+00 -1.74082124e-05 -7.84278018e-05] [ 0.9253407 -0.312263 ] -0.2869701039791107 False
1 [ 1.3365632  -1.19848513  0.12905604 -0.15167511] [ 0.06061185 -0.8371541 ] 0.1439434077801735 False
2 [ 1.33320738 -1.17025311  0.07520421 -0.35140991] [-0.07234666 -0.2972456 ] 0.13716978840021568 False
3 [ 1.33320738 -1.17025311 -0.03696184 -0.31456628] [-0.01025574  0.02454386] -0.0038130378909409044 False
4 [ 1.33458648 -1.13710905 -0.03696184 -0.31456628] [ 0.8735261  -0.25257793] -0.33859045229822354 False
5 [ 1.33486152 -1.12828156 -0.00371993 -0.08330178] [0.70487934 0.7882005 ] -0.2488615449624916 False
6 [ 1.33064721 -1.12499685  0.0982857  -0.10691188] [-0.52650326  0.33365813] 0.042754626956311784 False
7 [ 1.33064721 -1.12499685  0.29465324 -0.06335068] [ 0.44721937 -0.7373708 ] -0.15628693521022796 False
8 [ 1.31776456 -1.14046893  0.29465324 -0.06335068] [-0.5014601   0.30409884] 0.48457166764265996 False
9 [ 1.30802781 -1.17436438  0.25853589  0.17118117] [-0.332757  0.676873] 0.3667040188678982 False
Rowing0914 commented 2 years ago

Turns out that I am getting the same error for other envs such as EmptyEnvironmentURSim-v0. FYI: the same fix applies!

import os
import gym
import robo_gym
from robo_gym.wrappers.exception_handling import ExceptionHandling

target_machine_ip = '127.0.0.1'  # or other machine 'xxx.xxx.xxx.xxx'

# initialize environment
# env = gym.make('NoObstacleNavigationMir100Sim-v0', ip=target_machine_ip, gui=False)
env = gym.make('ObstacleAvoidanceMir100Sim-v0', ip=target_machine_ip, gui=False)
# env = gym.make('EmptyEnvironmentURSim-v0', ur_model='ur5', ip=target_machine_ip, gui=False)
# env = gym.make('EndEffectorPositioningURSim-v0', ur_model='ur10', ip=target_machine_ip, gui=False)
# env = gym.make('BasicAvoidanceURSim-v0', ur_model='ur5', ip=target_machine_ip, gui=False)
# env = gym.make('AvoidanceIros2021URSim-v0', ur_model='ur5', ip=target_machine_ip, gui=False)
# env = ExceptionHandling(env)

num_episodes = 1

for episode in range(num_episodes):
    done = False
    env.reset()
    for i in range(10):
        # random step in the environment
        action = env.action_space.sample()
        state, reward, done, info = env.step(action)
        print(i, state, action, reward, done)
os.system("killall -9 rosmaster")
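
Side note: since the script kills rosmaster manually at the end, wrapping the loop in try/finally would run that cleanup even when a step raises the InvalidStateError. A minimal sketch of the pattern (the function name is mine, not robo-gym API):

```python
import os

def run_random_steps(env, steps=10):
    """Step an env with random actions, always tearing down rosmaster after."""
    try:
        env.reset()
        for i in range(steps):
            action = env.action_space.sample()
            state, reward, done, info = env.step(action)
            print(i, state, action, reward, done)
    finally:
        # runs even if env.step() raises, so no stale rosmaster is left behind
        os.system("killall -9 rosmaster")
```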
matteolucchi commented 2 years ago

Hi @Rowing0914,

Thank you very much for using robo-gym and sorry for the late reply! I only had time now to look into the issue: some changes in the gym package caused the environments to break. It should be fixed now. If you are still interested, please do a fresh installation and try it out, and feel free to reopen the issue in case it is still not working.
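
If it still fails after reinstalling, the installed gym and robo-gym versions are useful context for a bug report, since the breakage came from a gym update. A small sketch to print them (the helper name is mine, not robo-gym API):

```python
import importlib.metadata

def package_versions(*names):
    """Return installed versions, or None for packages that are missing."""
    out = {}
    for name in names:
        try:
            out[name] = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            out[name] = None
    return out

# e.g. paste this output into the issue:
print(package_versions("gym", "robo-gym"))
```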

Cheers,

Matteo