PolarisYxh opened this issue 5 years ago
You need to provide more information for us to be able to reproduce your error. How did you install the package? What is the exact code version you ran? What is the exact command you ran?
I ran into the same problem a few days ago. It seems to be an error coming from the gym package. As a temporary fix, until the issue is resolved, you could try reverting gym to commit cc6ff414aefe669cc8d221a482ebe211816f60fe (URL to commit).
I found that if your gym is not the latest 0.14 version, it should be env._entry_point instead of env.entry_point. The reason is this gym commit: https://github.com/openai/gym/commit/dc91f434f83f3ad612ff353cbf2c1afc4788896b#diff-3dff15d44236dd4f0f823a6ddb7e8c9b. You could change the two occurrences of env.entry_point in baselines/run.py back to env._entry_point to solve this problem. Updating gym to the latest version may also help.
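If you don't want to track which gym version is installed, a version-agnostic workaround is possible. This is just a sketch (not code from baselines itself; get_entry_point is a hypothetical helper) that falls back with getattr so run.py works whether the installed gym exposes EnvSpec.entry_point or the older private EnvSpec._entry_point:

```python
# Hypothetical patch for baselines/run.py: resolve the entry point regardless
# of whether the installed gym exposes EnvSpec.entry_point (newer releases)
# or only the older private EnvSpec._entry_point.
def get_entry_point(spec):
    entry_point = getattr(spec, 'entry_point', None) or getattr(spec, '_entry_point', None)
    if entry_point is None:
        raise AttributeError("EnvSpec has neither 'entry_point' nor '_entry_point'")
    return entry_point

# Then, where run.py currently does
#     env_type = env.entry_point.split(':')[0].split('.')[-1]
# use instead:
#     env_type = get_entry_point(env).split(':')[0].split('.')[-1]
```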
I am getting the error too:
2019-10-16 15:23:50.846600: E tensorflow/core/platform/hadoop/hadoop_file_system.cc:132] HadoopFileSystem load error: dlopen(libhdfs.dylib, 6): image not found
Traceback (most recent call last):
File "/Users/ryanr/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/Users/ryanr/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/ryanr/B.Eng/MCAST_Degree_4/Thesis/baselines/baselines/run.py", line 34, in <module>
env_type = env._entry_point.split(':')[0].split('.')[-1]
AttributeError: 'EnvSpec' object has no attribute '_entry_point'
Not quite sure what the HadoopFileSystem load error is.
pip install --upgrade gym
Please post the versions of baselines and gym that you are using.
On the tf2 branch, gym is pinned to an earlier version. I updated gym to 0.15.4 and received this error instead:
(baselines-tf2) umeboshi@bard:~/workspace/others/openai/baselines-tf2$ python -m baselines.run --alg=deepq --env=BreakoutNoFrameskip-v0 --num_timesteps=1e5 --save_path=breakout-dqn.pkl --load_path=breakout-dqn.pkl
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/freespace/home/umeboshi/workspace/others/openai/baselines-tf2/baselines/run.py", line 12, in <module>
from baselines.common.cmd_util import common_arg_parser, parse_unknown_args, make_vec_env, make_env
File "/freespace/home/umeboshi/workspace/others/openai/baselines-tf2/baselines/common/cmd_util.py", line 12, in <module>
from gym.wrappers import FlattenDictWrapper
ImportError: cannot import name 'FlattenDictWrapper' from 'gym.wrappers' (/freespace/home/umeboshi/.virtualenvs/baselines-tf2/lib/python3.7/site-packages/gym/wrappers/__init__.py)
I corrected the above error by using #1051 as an example and replacing instances of FlattenDictWrapper with FlattenObservation.
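For anyone else hitting this, a sketch of what the replacement can look like with gym >= 0.15 (exact file and keys depend on your branch; flatten_dict_obs is a hypothetical helper, and the old dict_keys behaviour is recovered by combining FilterObservation with FlattenObservation):

```python
# Old code (gym < 0.15):
#     from gym.wrappers import FlattenDictWrapper
#     env = FlattenDictWrapper(env, dict_keys=['observation', 'desired_goal'])
# Sketch of the replacement:
from gym.wrappers import FilterObservation, FlattenObservation

def flatten_dict_obs(env, keys=('observation', 'desired_goal')):
    # Keep only the requested dict keys, then flatten them into a single Box space.
    return FlattenObservation(FilterObservation(env, list(keys)))
```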
Come on, why on earth doesn't openai gym follow any (semantic) versioning scheme, with so many incompatible changes in minor releases? Going from 0.15.4 to 0.15.6 changed and broke many APIs, including EnvSpec.tags, EnvSpec._entry_point, and public APIs like FlattenDictWrapper, without any warning or deprecation. Dear OpenAI, please take this seriously.
I found that installing gym 0.14 solved this issue:
pip install gym==0.14
In /home/yxh/anaconda3/envs/tensorenv/lib/python3.6/site-packages/baselines/baselines/run.py, change env._entry_point.split(':')[0].split('.')[-1] to env.entry_point.split(':')[0].split('.')[-1] in the two places it appears.
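For context, the lines in question sit in the loop that builds the env-type lookup table near the top of run.py. A rough sketch of the surrounding code (exact code depends on the baselines revision), with the only change being _entry_point -> entry_point:

```python
import gym
from collections import defaultdict

# Sketch of the lookup-table code in baselines/run.py.
_game_envs = defaultdict(set)
for env in gym.envs.registry.all():
    env_type = env.entry_point.split(':')[0].split('.')[-1]  # was env._entry_point
    _game_envs[env_type].add(env.id)
```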