It appears the implementations here currently don't support using Gym environments with Discrete action spaces. For example, the following code produces an error:
```
$ python -c 'from hbaselines.algorithms import RLAlgorithm; from hbaselines.fcnet.sac import FeedForwardPolicy; alg=RLAlgorithm(policy=FeedForwardPolicy, env="CartPole-v0", total_steps=1000000)'
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
pygame 2.0.1 (SDL 2.0.14, Python 3.7.10)
Hello from the pygame community. https://www.pygame.org/contribute.html
2021-09-16 13:25:32.015754: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2021-09-16 13:25:32.031343: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2599880000 Hz
2021-09-16 13:25:32.031693: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x56147bad4370 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-09-16 13:25:32.031734: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/jlilly7/Code/h-baselines-d/hbaselines/algorithms/rl_algorithm.py", line 581, in __init__
    self.trainable_vars = self.setup_model()
  File "/home/jlilly7/Code/h-baselines-d/hbaselines/algorithms/rl_algorithm.py", line 689, in setup_model
    **self.policy_kwargs
  File "/home/jlilly7/Code/h-baselines-d/hbaselines/fcnet/sac.py", line 205, in __init__
    self._ac_means = 0.5 * (ac_space.high + ac_space.low)
AttributeError: 'Discrete' object has no attribute 'high'
```
Given how broadly applicable discrete action spaces are, it would be good for this repo to support them. Alternatively, if I've misunderstood and done something wrong, please let me know.
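For context, the traceback points at the SAC policy's action-scaling step, which assumes the action space has `high`/`low` bounds — an attribute that `Box` spaces have but `Discrete` spaces do not. A minimal sketch of that assumption is below; the `Box`/`Discrete` classes here are stand-ins for `gym.spaces` (to keep the sketch self-contained), and `action_scaling` is an illustrative helper, not a function in h-baselines:

```python
import numpy as np

class Box:
    """Stand-in for gym.spaces.Box: a bounded continuous space."""
    def __init__(self, low, high):
        self.low = np.asarray(low, dtype=np.float32)
        self.high = np.asarray(high, dtype=np.float32)

class Discrete:
    """Stand-in for gym.spaces.Discrete: n integer-valued actions."""
    def __init__(self, n):
        self.n = n

def action_scaling(ac_space):
    """Return the (means, magnitudes) SAC uses to rescale its squashed
    tanh outputs into the action bounds. Only meaningful for Box spaces;
    Discrete spaces have no high/low, which is the source of the
    AttributeError above."""
    if isinstance(ac_space, Discrete):
        raise ValueError(
            "Discrete action spaces have no high/low bounds; "
            "this scaling step assumes a continuous (Box) space.")
    means = 0.5 * (ac_space.high + ac_space.low)
    magnitudes = 0.5 * (ac_space.high - ac_space.low)
    return means, magnitudes
```

An `isinstance` guard like this would at least turn the opaque `AttributeError` into an explicit "unsupported action space" error at construction time.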
I'm currently attempting to add support for discrete environments in a fork; if it works well, a pull request may come out of it, but no promises yet.
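For what it's worth, one common workaround when adapting continuous-control algorithms like SAC to `Discrete(n)` spaces is to treat the space as n-dimensional and pick the action by argmax of the policy's output. The helper below is a hypothetical sketch of that mapping, not code from h-baselines:

```python
import numpy as np

def continuous_to_discrete(action, n):
    """Map a continuous action vector of length n (e.g. a tanh-squashed
    policy output in [-1, 1]^n) to a single discrete action index by
    taking the argmax. This is one possible shim, not the only design:
    alternatives include a categorical policy head (as in discrete SAC
    variants) or Gumbel-softmax relaxation."""
    action = np.asarray(action, dtype=np.float64)
    if action.shape != (n,):
        raise ValueError("expected an action vector of length %d" % n)
    return int(np.argmax(action))
```

For example, `continuous_to_discrete([0.1, 0.9, -0.3], 3)` selects action `1`. The argmax shim is simple but discards gradient information at the boundary, which is why discrete-SAC variants usually prefer a categorical policy head instead.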