-
Hello, why does DDPG work so poorly on InvertedPendulum-v4? Do you have any good suggestions?
After 200 episodes of training, the pendulum still keeps falling over during the final demonstration.
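Two common culprits with DDPG on this task are the exploration noise scale and the target-network update rate. As a minimal sketch (the parameter values and names here are illustrative, not taken from the question), the Polyak soft update that DDPG applies to its target networks looks like:

```python
import numpy as np

def soft_update(target, source, tau=0.005):
    """Polyak-average source parameters into target parameters (DDPG-style).

    Each element is a parameter array; tau controls how fast the target
    network tracks the online network. Too large a tau destabilizes training.
    """
    return [(1.0 - tau) * t + tau * s for t, s in zip(target, source)]

# Toy demonstration: the target slowly converges toward the source.
target = [np.zeros(2)]
source = [np.ones(2)]
for _ in range(1000):
    target = soft_update(target, source)
```

With `tau=0.005`, after 1000 updates the target has moved most of the way toward the source; if results are still poor after 200 episodes, it is often worth training longer and decaying the action noise over time.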
-
**1 Describe the bug**
If we run Gymnasium environments that do not belong to classic control, an error is always thrown. The error is unique to each environment. This could be because differ…
-
Value Iteration With Frozen Lake does not work.
1. It fails at `env = gym.make('FrozenLake-v0')`; the error says to use v1 instead of v0.
2. Done. But when running the last code cell, it says:
/opt/cond…
-
> FrozenLake-v0 defines "solving" as getting average reward of 0.78 over 100 consecutive trials.
But my results are around 70-75%: https://github.com/jinzishuai/learn2deeplearn/blob/master/learnRL/…
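The "solved" criterion can be checked mechanically. A small sketch (function name is mine) that scans episode rewards for any window of 100 consecutive trials averaging at least 0.78:

```python
import numpy as np

def solved(episode_rewards, window=100, threshold=0.78):
    """True if any `window` consecutive episodes average >= `threshold`."""
    r = np.asarray(episode_rewards, dtype=float)
    if r.size < window:
        return False
    # Moving average over every window of consecutive episodes.
    means = np.convolve(r, np.ones(window) / window, mode="valid")
    return bool(np.any(means >= threshold))
```

By this measure, an average of 70-75% over the last 100 episodes does indeed fall short of the 0.78 bar.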
-
Thanks for sharing these methods! I'm keen to use this repo as a benchmark in an MSc project.
I'm having trouble running the 'example usage' code given in the readme (Exact Maximum Entropy IRL on F…
-
I am attempting to access the Atari environments, and upon importing the latest versions of ale-py, autorom, gym, and even gymnasium, I get the following error when attempting to make an environment of a…
-
I'm trying to execute this simple code
```
import gym
from genrl.agents import QLearning
from genrl.trainers import ClassicalTrainer
env = gym.make("FrozenLake-v0")
agent = QLearning(env…
-
I am trying to use the code as an example. Well, it is a little bit strange: when I changed the frozen lake env to the deterministic version, e.g. `env = gym.make("FrozenLake-v0", is_slippery=False)`, …
-
### Question
Hello everyone, I realize that I'm a beginner, but I've had this problem for a while and I feel frustrated...
I'm trying to build a custom environment (in Colab) for some univer…
-
_From @RobAltena on August 25, 2017 1:12_
#### Issue Description
Trying to generate the `FrozenLake-v0` environment in rl4j:
```
public static void main(String[] args){
GymEnv mdp = new G…