-
Hi @Crawford-fang,
Thank you very much for open-sourcing your code. I am very interested in training robots with deep reinforcement learning. I downloaded your code, but running it produces the following error:
`/home/he/miniconda3/envs/rostorch/bin/python3.7 /home/he/turtlebot_ws/src/ROS_pytorch_RL/DQN/DQN2.py
Traceback (m…
-
Hi,
while working on a PyTorch DQN agent for BSuite experiments, I noticed quite bad results on the mnist and mountain car experiments. I see that a similar question was addressed [here](https://gi…
-
## 🚀 Feature
There seem to be a fair few inefficiencies in the RL model code.
In both the VPG and DQN code, the network is computed twice, once to generate the trajectory and then once again in the…
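For context, here is a minimal sketch of the caching pattern this hints at for the VPG case, assuming a Gymnasium-style discrete-action environment; the network, sizes, and helper name are illustrative and not taken from the repo. Log-probabilities are stored during the rollout so the update does not need a second forward pass over the same states.

```python
import torch
import torch.nn as nn

# Illustrative policy network; layer sizes are placeholders, not the repo's
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))

def collect_trajectory(env, steps=128):
    """Roll out once, caching log-probs so the policy-gradient loss can
    reuse them instead of recomputing the network over stored states."""
    log_probs, rewards = [], []
    obs, _ = env.reset()
    for _ in range(steps):
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))  # keeps the graph for backprop
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(float(reward))
        if terminated or truncated:
            obs, _ = env.reset()
    return torch.stack(log_probs), rewards
```

The trade-off is memory: keeping the computation graph alive for a long rollout costs more than recomputing from stored states, which may be why some implementations accept the two-pass approach.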
-
I'm getting an unexpected, but reliable, drop in GPU speed after running 11-ish games of Chapter07/01_dqn_basic.py using the --cuda option. For the first 10 games, I get speeds comparable with the tex…
-
I ran the DQN server on the CIFAR-10 dataset (in cifar-10.json.template, I only changed the server entry to "server": "dqn").
The complete error is as follows:
.
.
.
[INFO][21:32:06]: Training on client #98
[INFO]…
-
### Proposal
To encourage the use of Gymnasium and build up the RL community, I would propose that a wide range of tutorials be created.
This is a list of tutorials that could be made (a minimal starter loop is sketched after the list):
- [x…
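As a flavor of what an introductory tutorial could start from, here is a minimal Gymnasium interaction loop; the environment name and step count are arbitrary illustrative choices.

```python
import gymnasium as gym

# Minimal interaction loop: random actions on CartPole, just to show the API
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=42)
for _ in range(200):
    action = env.action_space.sample()  # placeholder for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```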
-
### 🐛 Bug
I tried to run the RL algorithms following the documentation, and I also tried the plotting utilities. However, I am getting errors because algo_scores is empty in the plot_from_file.py funct…
-
Excuse me, I have a few questions:
First, I see that you are using PyTorch; which version of the PyTorch framework are you using?
Second, compared with the DQN program, does this DDPG use different …
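As a side note on the version question, the installed build can be reported directly with standard PyTorch calls (shown only for reference, nothing repo-specific):

```python
import torch

# Report the framework build when discussing version-related differences
print(torch.__version__)          # e.g. '1.13.1'
print(torch.version.cuda)         # CUDA version of the build; None for CPU-only
print(torch.cuda.is_available())  # whether a GPU is usable at runtime
```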
-
python benchmark.py
Calculating device: cpu
Deep learning toolbox: PyTorch.
Algorithm: DQN
Environment: Atari
Scenario: Breakout-v4
A.L.E: Arcade Learning Environment (version 0.7.5+db37282)
[P…
-
Could you share the dependencies for this repo?