-
After executing the code, when CARLA starts it gives me an error. This is the full log:
```
Traceback (most recent call last):
  File "run_RL.py", line 89, in
    args.host, args.port)
  File "/medi…
```
-
Hi, I have just tried running Reacher-v1 for 1,000,000 timesteps with the default settings and it didn't learn anything (it just gets stuck at a test reward of -12), but it looks like you made it run with som…
-
Hello,
I am amazed by your work. I am wondering whether you tested the Sokoban game with standard RL methods (Q-learning, A2C, etc.), and whether you have a success rate for this kind of game?
dikke updated 3 years ago
-
After running _train.sh with the default Config.py on a DGX-1 for about an hour, I see that CPU usage stays fairly constant at about 15%, and one GPU is being used at about 40%.
The set…
-
Is there any plan on the horizon to port this code to PyTorch?
-
Hi,
How long does the training process take? I am running the TensorFlow CPU version on an i7-4720HQ CPU @ 2.60GHz, and the training has been running for a couple of hours now...
~Sandip
-
### What happened + What you expected to happen
I am getting different reward distributions when I use the Python API vs the rllib CLI on a PPO checkpoint. Below is a comparison of reward histograms…
-
I have some questions about DeepMind's pysc2 baseline agent:
https://deepmind.com/blog/deepmind-and-blizzard-open-starcraft-ii-ai-research-environment/
I'm trying to implement baseline agent il…
-
While running 「7_breakout_learning.ipynb」, I got the following error.
My guess is that it is an error related to multiprocessing.
For reference, the library versions are as follows:
gym==0.17.1
matplotlib==2.2.5
JSAnimation==0.1
pyglet==1.5.0
torch==0.4.1
```
…
```
-
**Discussion**
I will be moving a lot of the README text regarding DataBunches in here, to make it more constructive/interactive, once the basic requirements of the repository are met. Current goal…