jackwilkinson255 opened this issue 3 years ago (status: Open)
Your error suggests that it isn't related to your usage of a bullet environment:

```
ImportError: cannot import name 'compute_advantages'
```
Can you provide me with some more details?

1. How did you install garage?
2. Can you provide a minimal example that reproduces your error, perhaps using one of the non-custom bullet environments that can be imported directly from a pybullet installation?
3. Are you able to run any of the examples under the garage `examples` directory?
Thanks! @Avnishn
Hi Avnish,

I installed garage with `pip install .` (pip says its version is 2020.9.0rc2.dev0) and `pip install garage[bullet]`.

I think the installation is okay because all of the examples work with either gym or bullet environments. For example, if I swap the environment in the `ppo_pendulum.py` torch example for `KukaBulletEnv-v0` or `MinitaurBulletEnv-v0`, it still runs.

Does the `ImportError: cannot import name 'compute_advantages'` give you any indication of what might be wrong with the environment?

Thanks
Hi, just to update: the same custom pybullet environment works with the `ppo_pendulum.py` TensorFlow implementation.
@jackwilkinson255 Given the information you've provided so far, it seems this issue is more nuanced and potentially has to do with your custom environment.

You will likely need to upload an example launcher file that reproduces your issue; that way I can help you more.

The only thing I can really recommend based on the information given is that you uninstall your garage installation and reinstall via:

```
pip install -e .['dev','all']
```

It still seems like this import issue is related to your installation, but if it's an issue with the pickled environment, then I'll need a minimum viable example.

If you don't wish to share your custom environment publicly, can you please join our Slack community and message me personally over there?
The thing that stands out the most to me in the traceback you've provided is that unpickling the policy causes an import of `garage.torch`, which imports `torch`, which then leads to another import of `garage.torch`. That is not what is supposed to happen from that `import torch` statement. Have you added `garage/src/garage` to your Python path? Because that will break everything.

By the way, to simplify debugging you might want to use `LocalSampler` instead of `RaySampler`.
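A minimal sketch of the shadowing failure described above, using made-up package names (`app` standing in for `garage`, `shadowed` standing in for `torch`) so it runs without either library installed. Putting a package's inner directory on `sys.path` makes a plain `import` resolve back to the package's own submodule, re-entering a half-initialized module and raising the same kind of `cannot import name` error:

```python
import os
import sys
import tempfile

# Hypothetical reconstruction of the failure mode: if garage/src/garage is on
# sys.path, "import torch" inside garage/torch/_functions.py resolves to
# garage/torch itself, re-runs its __init__.py, and hits a module whose
# definitions have not finished executing yet.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "app", "shadowed")
os.makedirs(pkg)
open(os.path.join(root, "app", "__init__.py"), "w").close()
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    # mirrors garage/torch/__init__.py importing compute_advantages
    f.write("from app.shadowed._functions import helper\n")
with open(os.path.join(pkg, "_functions.py"), "w") as f:
    # mirrors garage/torch/_functions.py doing "import torch" before its defs
    f.write("import shadowed\n\ndef helper():\n    pass\n")

sys.path.insert(0, root)                       # normal: makes "app" importable
sys.path.insert(0, os.path.join(root, "app"))  # the mistake: inner dir on path

try:
    import app.shadowed
    err = ""
except ImportError as e:
    err = str(e)
print(err)  # cannot import name 'helper' ... (circular import)
```

The same mechanism produces `cannot import name 'compute_advantages'` when `torch` is shadowed by `garage/torch`.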
Hi guys,

@avnishn Installing with `pip install -e .['dev','all']` seems to have helped, as I can now see the local path to the garage repo when I run `pip list`. I think there were some conflicts before.

@krzentner Changing from `RaySampler` to `LocalSampler` seems to have removed the `compute_advantages` error, and it's now training. However, when I try to run the trained policy I still receive the same error.
This is the code I am using for playing the learned policy:
```python
import os, inspect

currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
expdir = currentdir + '/data/local/experiment/ppo_spotmicro_25/'

import gym
from garage.envs.bullet import BulletEnv
import garage.envs.spotmicro_env
import tensorflow as tf

# Load the policy
from garage.experiment import Snapshotter

snapshotter = Snapshotter()
with tf.compat.v1.Session():  # optional, only for TensorFlow
    data = snapshotter.load(expdir)
policy = data['algo'].policy

env = gym.make('SpotMicroEnv-v2', render=True)

obs = env.reset()  # the initial observation
policy.reset()
done = False
while not done:
    # get_action() returns an (action, agent_info) tuple, hence action[0]
    action = policy.get_action(obs)
    obs, rew, done, _ = env.step(action[0])
    # env.render()  # render the environment to see what's going on (optional)
env.close()
```
It seems to break when the snapshotter loads (`data = snapshotter.load(expdir)`), and I get this message:
```
  File "run_policy_bullet.py", line 24, in <module>
    data = snapshotter.load(expdir)
  File "/home/jack/repos/garage/src/garage/experiment/snapshotter.py", line 170, in load
    return cloudpickle.load(file)
  File "/home/jack/repos/garage/src/garage/torch/__init__.py", line 3, in <module>
    from garage.torch._functions import (compute_advantages, dict_np_to_torch,
  File "/home/jack/repos/garage/src/garage/torch/_functions.py", line 16, in <module>
    import torch
  File "/home/jack/repos/garage/src/garage/torch/__init__.py", line 3, in <module>
    from garage.torch._functions import (compute_advantages, dict_np_to_torch,
ImportError: cannot import name 'compute_advantages'
```
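For context, the traceback shows that `Snapshotter.load` ultimately comes down to `cloudpickle.load` on the saved snapshot file, and unpickling re-imports whatever modules defined the stored objects; for a torch policy that is what pulls in `garage.torch`. A stand-in round trip with plain `pickle` and made-up snapshot contents, just to illustrate the mechanism:

```python
import os
import pickle
import tempfile

# Hypothetical snapshot contents; a real garage snapshot stores the algo
# (and thus the policy), and unpickling those objects re-imports the modules
# that defined them -- the step where garage.torch gets pulled in.
path = os.path.join(tempfile.mkdtemp(), "params.pkl")
with open(path, "wb") as f:
    pickle.dump({"itr": 25, "algo": "PPO"}, f)

with open(path, "rb") as f:
    data = pickle.load(f)
print(data["algo"])  # -> PPO
```

So the `ImportError` here is not about the load call itself; it surfaces during the imports that unpickling triggers.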
The backtrace still looks very strange. `import torch` should not cause `garage.torch` to be imported, yet the backtrace above shows exactly that. Can you print out `sys.path`?
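A quick diagnostic along those lines: besides printing `sys.path`, checking where the interpreter would load a module from (without importing it) reveals shadowing. The snippet below uses `json` only so it runs anywhere; substitute `"torch"` in the real environment, where a shadowed module would show an origin inside the garage repo rather than site-packages:

```python
import sys
import importlib.util

print(sys.path)

# Where would this module be loaded from? (Use "torch" in the real setup --
# a shadowed torch would resolve to garage/torch instead of site-packages.)
spec = importlib.util.find_spec("json")
print(spec.origin)
```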
Hi,
`sys.path` gives:

```
['', '/home/jack/repos/motion_imitation', '/opt/ros/noetic/lib/python3/dist-packages', '/home/jack/anaconda3/envs/garage2/lib/python36.zip', '/home/jack/anaconda3/envs/garage2/lib/python3.6', '/home/jack/anaconda3/envs/garage2/lib/python3.6/lib-dynload', '/home/jack/.local/lib/python3.6/site-packages', '/home/jack/anaconda3/envs/garage2/lib/python3.6/site-packages', '/home/jack/PycharmProjects/garage/src']
```
What directory is the script you're running located in? I don't understand how the above `sys.path` could cause that backtrace. `/home/jack/PycharmProjects/garage/src` only contains the directory `garage`, right?
Hi All,

I am trying to run PPO on a custom PyBullet environment (very similar to `MinitaurBulletEnv-v0`) and am receiving an error when obtaining the first samples:

I have tried using the `BulletEnv` and `GymEnv` wrappers, but both give me the same error. Can you give me an idea of what's wrong with it, and whether it's to do with how the environment is pickled?

Thanks,
Jack