-
## 🚀 Feature
Support parallelized/asynchronous execution of ops on CPU.
PyTorch currently supports [asynchronous execution on GPU](https://pytorch.org/docs/master/notes/cuda.html#asynchronous-e…
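As an illustration of the requested behavior (not an existing PyTorch API), here is a minimal sketch using Python's standard `concurrent.futures` to launch independent CPU "ops" concurrently and synchronize only where their results are consumed, mirroring how CUDA streams defer synchronization. The op functions are hypothetical stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

def op_a(x):
    # Hypothetical stand-in for an independent CPU op.
    return x * 2

def op_b(x):
    # Another op with no data dependency on op_a.
    return x + 10

with ThreadPoolExecutor(max_workers=2) as pool:
    # Launch both ops asynchronously; neither blocks the other.
    fut_a = pool.submit(op_a, 3)
    fut_b = pool.submit(op_b, 3)
    # Synchronize only at the point where results are consumed.
    result = fut_a.result() + fut_b.result()

print(result)  # 6 + 13 = 19
```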
-
Greetings!
It's probably a small issue in most image-CNN-related cases, but when dealing with text, multi-input NNs, reinforcement learning, or long-term memory networks, some layers should be applied …
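The pattern being asked for can be sketched with plain Python threads; the two branch functions below are hypothetical stand-ins for independent layer branches of a multi-input network, which could run concurrently because they share no data dependency:

```python
import threading

def text_branch(tokens):
    # Hypothetical stand-in for a text-encoder branch.
    return sum(tokens)

def image_branch(pixels):
    # Hypothetical stand-in for an image-encoder branch.
    return max(pixels)

results = {}

def run(name, fn, arg):
    results[name] = fn(arg)

# The branches are independent, so they can execute in parallel.
t1 = threading.Thread(target=run, args=("text", text_branch, [1, 2, 3]))
t2 = threading.Thread(target=run, args=("image", image_branch, [7, 4, 9]))
t1.start(); t2.start()
t1.join(); t2.join()

# Merge the branch outputs only after both have finished.
fused = results["text"] + results["image"]
print(fused)  # 6 + 9 = 15
```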
-
I am using reinforcement learning for mathematical optimization, with a PPO2 agent in Google Colab.
With my custom environment, episode rewards remain zero when I check TensorBoard. Als…
-
Is it possible to manually control when and for how long the simulator steps? Basically, I would like to use AirSim like [OpenAI Gym environments](https://github.com/openai/gym), in which I get t…
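A Gym-style interface means the simulator advances only when the caller invokes `step`. A minimal sketch of that control pattern (the environment and its dynamics are toy placeholders, not AirSim's API):

```python
class ManualStepEnv:
    """Toy environment that advances only when step() is called."""

    def reset(self):
        self.t = 0
        self.state = 0.0
        return self.state

    def step(self, action):
        # The simulator is frozen between calls; one call = one tick.
        self.t += 1
        self.state += action
        reward = -abs(self.state)  # toy reward: stay near zero
        done = self.t >= 3         # fixed horizon for the sketch
        return self.state, reward, done, {}

env = ManualStepEnv()
obs = env.reset()
trajectory = []
done = False
while not done:
    obs, reward, done, info = env.step(1.0)
    trajectory.append((obs, reward))

print(trajectory)  # [(1.0, -1.0), (2.0, -2.0), (3.0, -3.0)]
```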
-
Hello,
I've tried in vain to find suitable hyperparameters for SAC in order to solve MountainCarContinuous-v0.
Even with hyperparameter tuning (see "add-trpo" branch of [rl baselines zoo](https:…
-
Hi @EndingCredits,
it's really cool that you got `NEC` working :+1:
Have you tried running your code on the Atari environments in OpenAI Gym?
I tried to train on `Pong`, but I got th…
-
### What happened + What you expected to happen
Hi, I recently tried to recreate the experiments from the original PPO paper. First I used Stable Baselines3 to do so and noticed that the reward gen…
-
Hi,
first, thank you for your repo!
I just want to ask: is there a source where I can learn more about the architecture of your work, besides the paper you mention?
I mean, maybe yo…
-
### Question
I'm looking for a solution to this error.
```
[INFO]: Base environment:
    Environment device : cuda:0
    Physics step-size : 0.005
    Rendering step-size : 0.02
    Environment …
```