avisingh599 / reward-learning-rl

[RSS 2019] End-to-End Robotic Reinforcement Learning without Reward Engineering
https://sites.google.com/view/reward-learning-rl/

Incompatibility issue with tensorflow version 2 #47

Open ArgyChris opened 3 years ago

ArgyChris commented 3 years ago

Hello,

Thank you for sharing the code for the paper. I would like to point out that the code appears to be incompatible with TensorFlow 2. Specifically, I have a recent GPU model, so I have to use a recent version of TensorFlow (version >= 2). However, running a simple example gives the following error:

softlearning run_example_local examples.classifier_rl --n_goal_examples 1 --task=Image48SawyerDoorPullHookEnv-v0 --algorithm VICERAQ --num-samples 1 --n_epochs 1 --active_query_frequency 1

Traceback (most recent call last):
  File "/home/argyrioschristodoul/anaconda3/envs/softlearning/bin/softlearning", line 33, in <module>
    sys.exit(load_entry_point('softlearning', 'console_scripts', 'softlearning')())
  File "/home/argyrioschristodoul/Projects/reward-learning-rl/softlearning/scripts/console_scripts.py", line 202, in main
    return cli()
  File "/home/argyrioschristodoul/anaconda3/envs/softlearning/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/argyrioschristodoul/anaconda3/envs/softlearning/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/argyrioschristodoul/anaconda3/envs/softlearning/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/argyrioschristodoul/anaconda3/envs/softlearning/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/argyrioschristodoul/anaconda3/envs/softlearning/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/argyrioschristodoul/Projects/reward-learning-rl/softlearning/scripts/console_scripts.py", line 71, in run_example_local_cmd
    return run_example_local(example_module_name, example_argv)
  File "/home/argyrioschristodoul/Projects/reward-learning-rl/examples/instrument.py", line 208, in run_example_local
    example_args = example_module.get_parser().parse_args(example_argv)
  File "/home/argyrioschristodoul/Projects/reward-learning-rl/examples/classifier_rl/__init__.py", line 26, in get_parser
    from .utils import get_parser
  File "/home/argyrioschristodoul/Projects/reward-learning-rl/examples/classifier_rl/utils.py", line 7, in <module>
    import softlearning.algorithms.utils as alg_utils
  File "/home/argyrioschristodoul/Projects/reward-learning-rl/softlearning/algorithms/__init__.py", line 1, in <module>
    from .sql import SQL
  File "/home/argyrioschristodoul/Projects/reward-learning-rl/softlearning/algorithms/sql.py", line 9, in <module>
    from .rl_algorithm import RLAlgorithm
  File "/home/argyrioschristodoul/Projects/reward-learning-rl/softlearning/algorithms/rl_algorithm.py", line 16, in <module>
    class RLAlgorithm(tf.contrib.checkpoint.Checkpointable):
AttributeError: module 'tensorflow' has no attribute 'contrib'

The code supports TensorFlow 1.13.0, and with that version I managed to run the example above on my CPU. However, when I use tf-nightly-gpu (2.7.0-dev20210806), I get the runtime error shown above.

Thank you