-
Hello, thanks for all the cool implementations.
I was specifically interested in the MoG-DQN; however, when running your implementation, it does not seem to manage to learn even the simplest CartPole …
-
Hi, after I finish training with the helloworld-DQN file, how can I save the trained neural network's parameters and load them for use at test time? Also, how can I set up my own custom environment instead of using one of Gym's ready-made ones?
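A minimal sketch of the save/load part, assuming the repo's DQN is built on PyTorch (the layer sizes below are illustrative, not the actual helloworld-DQN architecture):

```python
# Hedged sketch: persisting a trained Q-network's parameters with PyTorch.
# The 4-in / 2-out shape matches CartPole but is only an assumption here.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
# ... training loop runs here ...

# Save only the parameters (state_dict), not the whole module object.
torch.save(q_net.state_dict(), "dqn_cartpole.pt")

# At test time: rebuild the identical architecture, then load the weights.
q_eval = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
q_eval.load_state_dict(torch.load("dqn_cartpole.pt"))
q_eval.eval()  # evaluation mode: no dropout / batch-norm updates

obs = torch.zeros(1, 4)                 # dummy CartPole-like observation
action = q_eval(obs).argmax(1).item()   # act greedily with the loaded net
```

For a custom environment, the usual route is to subclass `gym.Env`, implement `reset` and `step`, define `observation_space`/`action_space`, and pass an instance of it wherever the training script currently constructs the Gym environment.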
-
### What happened + What you expected to happen
I tried the getting started commands at https://docs.ray.io/en/latest/rllib/rllib-training.html
With `pip install tensorflow[and-cuda]` followed by …
-
CI test **linux://rllib:learning_tests_cartpole_dqn_gpu** is flaky. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/5482#0190c1b7-f900-44e3-8ce0-e81beaf3d145
DataCaseName-linu…
-
CI test **linux://rllib:learning_tests_multi_agent_cartpole_dqn_gpu** is flaky. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/5494#0190c314-728f-42a8-a960-af20a90ba259
DataC…
-
Trying to debug larger width environments (7 currently).
Things to try:
1. Different metric (average Q-value from the 2013 DQN paper, https://arxiv.org/pdf/1312.5602.pdf).
```
5.1 Training and Sta…
```
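That paper's metric can be sketched as follows (function and variable names here are my own; the paper tracks the average of max_a Q(s, a) over a fixed set of states collected once before training):

```python
import numpy as np

def average_max_q(q_values: np.ndarray) -> float:
    """Mean over held-out states of max_a Q(s, a).

    q_values: array of shape (num_heldout_states, num_actions) holding the
    current network's Q estimates for a fixed, pre-collected state set.
    """
    return float(q_values.max(axis=1).mean())

# Two held-out states, two actions: per-state maxes are 1.0 and 0.5.
q = np.array([[0.2, 1.0], [0.5, 0.3]])
print(average_max_q(q))  # 0.75
```

Because the held-out state set never changes, this curve is much smoother than episode reward and is a useful sanity check when debugging the wider environments.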
-
I am getting the following TypeError:
File ~/Desktop/mushroom-rl-master/examples/minigrid_dqn.py:274 in experiment
mdp = MiniGrid(args.name, history_length=args.history_length)
TypeError: …
-
Hello, I'd like to understand how to use the "Actor_MIP" class in the provided code. This part is mentioned as a highlight in your paper, but it seems that the class is not called or utilized in the c…
-
Hey Kevin, I am facing the following error when running DQN on the heavenhell environment.
Error log:
```bash
$ python run.py --env POMDP-heavenhell_3-episodic-v0 --inembed 64 --model DQN --ve…
```
-
Hi,
I'm facing a _KeyError: 'email'_ error when running the following command, i.e., pushing to the hub on the Colab for Unit 3. I'm not sure what triggered it; all the previous hands-on were r…