-
I set up the environment exactly as described in the README file, but when I try to run the script ddqn.py it gives me this error:
"Unable to load mono library from /home/abuzekry/Documents/Code/donkey_rl/donke…
-
Thank goodness I came across this repo and found someone working on this idea.
One of my thoughts is this:
KAN seems to be good at fitting continuous functions, but DQN and DDQN use discrete action…
yuzej updated 3 months ago
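One clarification on the continuous-vs-discrete point above: the Q-function itself is a continuous function of the state, so a continuous approximator (an MLP, or a KAN) can still serve as the Q-network; discreteness only enters at action selection, where the agent takes an argmax over the approximator's per-action outputs. A minimal sketch, with a linear stand-in for the network (hypothetical, not from the repo):

```python
import numpy as np

def q_values(state, weights):
    # Stand-in for any continuous function approximator (MLP, KAN, ...):
    # maps a state vector to one Q-value per discrete action.
    return state @ weights

def select_action(state, weights, epsilon, rng):
    # Epsilon-greedy: the approximator is continuous, but the action
    # becomes discrete via an argmax over its outputs.
    n_actions = weights.shape[1]
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(state, weights)))

rng = np.random.default_rng(0)
weights = np.array([[1.0, -1.0], [0.5, 2.0]])  # 2 state dims -> 2 actions
state = np.array([1.0, 0.0])
action = select_action(state, weights, epsilon=0.0, rng=rng)  # greedy pick
```

With `epsilon=0.0` the selection is purely greedy, so the discreteness is entirely in the final argmax.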
-
Hello,
Would it be possible to remove all the files that are already in the l2rpn-baselines GitHub repository?
This would look like:
```
DDQN_NN.py
Geirina.py
__init__.py
action_to_index_jd1.npy
all_ac…
```
-
Running python game_render.py, we are facing this traceback:
Traceback (most recent call last):
  File "game_render.py", line 145, in <module>
    env.play(agent)
  File "pvz_rl\agents\ddqn_agent.py", line 39…
-
In the "main_torch_dqn_lunar_lander_2020.py" file, the line
--> self.state_memory[index] = state
raises
"ValueError: setting an array element with a sequence. The requested array would exceed the maximum nu…
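This ValueError typically means the object being written into the buffer row is not a flat array of the expected shape, e.g. the environment returned a `(observation, info)` tuple (as newer Gym/Gymnasium `reset()` does) or a ragged array. A hedged sketch of a defensive store, with hypothetical buffer dimensions, not the repo's actual code:

```python
import numpy as np

# Hypothetical replay buffer: 1000 slots, 8-dim LunarLander observations.
state_memory = np.zeros((1000, 8), dtype=np.float32)

def store_state(index, state):
    # Newer gym versions return (obs, info) from reset(); unwrap the tuple.
    if isinstance(state, tuple):
        state = state[0]
    state = np.asarray(state, dtype=np.float32).reshape(-1)
    if state.shape[0] != state_memory.shape[1]:
        raise ValueError(f"expected state of length {state_memory.shape[1]}, "
                         f"got {state.shape[0]}")
    state_memory[index] = state

store_state(0, ([0.1] * 8, {}))  # tuple form is unwrapped before storage
```

Printing `type(state)` and `np.shape(state)` right before the failing assignment usually pinpoints which of the two cases is occurring.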
-
Hello, when training the CartPole game I changed the DQN input to 84x84 images, but the action always collapses toward a single direction. Do you have any advice on this?
Network design: conv2d + conv2d + conv2d + fc
Reward: I tried both the default scheme (1 per step, 0 at termination) and theta / (1 - thetaThreshold)
Q-values: I tried both DQN and DDQN
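One common culprit when moving to raw pixels (hedged, since the full code isn't shown): a single 84x84 frame carries no velocity information, so the value estimates can degenerate and the policy collapses to one action. The usual remedy, as in the DQN Atari setup, is stacking the last few grayscale frames. A minimal sketch, assuming frames arrive as HxWx3 uint8 arrays (names are illustrative):

```python
import numpy as np
from collections import deque

def to_gray84(frame):
    # Luminance grayscale, then naive strided downsampling to 84x84.
    # (A real pipeline would use proper interpolation, e.g. cv2.resize.)
    gray = frame.astype(np.float32) @ np.array([0.299, 0.587, 0.114],
                                               dtype=np.float32)
    h, w = gray.shape
    rows = np.linspace(0, h - 1, 84).astype(int)
    cols = np.linspace(0, w - 1, 84).astype(int)
    return gray[np.ix_(rows, cols)] / 255.0

class FrameStack:
    """Keeps the last k processed frames so the network can infer motion."""
    def __init__(self, k=4):
        self.frames = deque(maxlen=k)

    def push(self, frame):
        g = to_gray84(frame)
        if not self.frames:
            self.frames.extend([g] * self.frames.maxlen)  # pad at episode start
        else:
            self.frames.append(g)
        return np.stack(self.frames)  # shape (k, 84, 84): the network input

stack = FrameStack(k=4)
obs = stack.push(np.zeros((400, 600, 3), dtype=np.uint8))
```

The conv network then takes a k-channel input instead of a single frame; without the stack, two states with the pole at the same angle but opposite angular velocities look identical to the network.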
-
On reloading, the model performs very poorly compared to how it performed during training.
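A frequent cause of this symptom (hedged, since the loading code isn't shown) is that only the network weights are checkpointed, so on reload the exploration rate silently resets to its initial value, or evaluation keeps taking random epsilon-greedy actions. One sketch of checkpointing the full agent state, with hypothetical field names:

```python
import pickle

def save_checkpoint(path, weights, epsilon, step):
    # Persist everything needed to resume or evaluate: the weights AND
    # training state such as the current exploration rate and step count.
    with open(path, "wb") as f:
        pickle.dump({"weights": weights, "epsilon": epsilon, "step": step}, f)

def load_checkpoint(path, evaluate=False):
    with open(path, "rb") as f:
        ckpt = pickle.load(f)
    if evaluate:
        ckpt["epsilon"] = 0.0  # act greedily when evaluating the policy
    return ckpt

save_checkpoint("agent.pkl", weights=[0.1, 0.2], epsilon=0.05, step=1000)
ckpt = load_checkpoint("agent.pkl", evaluate=True)
```

If the framework is PyTorch, the analogous checks are that the optimizer/epsilon state is saved alongside `state_dict()` and that `model.eval()` is called before evaluation.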
-
Dear Makrout,
Thank you for sharing your paper's code.
I want to use a hybrid DDQN-DDPG method in my paper, similar to yours.
Unfortunately, when I run main.py, I get several errors on importe…
-
Implement the general SARSA algorithm according to the definition given by Sutton and Barto.
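For reference, the on-policy tabular SARSA update from Sutton and Barto is Q(s,a) ← Q(s,a) + α[r + γ·Q(s',a') − Q(s,a)], where a' is the action actually taken in s' (not the max, which would be Q-learning). A minimal sketch on a toy chain environment (the environment and all names are illustrative, not part of any referenced repo):

```python
import random

def sarsa(n_states, n_actions, step_fn, episodes=500,
          alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular SARSA: on-policy TD control (Sutton & Barto, Ch. 6)."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def policy(s):
        if rng.random() < epsilon:
            return rng.randrange(n_actions)
        return max(range(n_actions), key=lambda a: Q[s][a])

    for _ in range(episodes):
        s = 0
        a = policy(s)
        done = False
        while not done:
            s2, r, done = step_fn(s, a)
            a2 = policy(s2)  # action actually taken next: on-policy
            target = r + (0.0 if done else gamma * Q[s2][a2])
            Q[s][a] += alpha * (target - Q[s][a])  # uses Q[s2][a2], not max
            s, a = s2, a2
    return Q

# Toy 3-state chain: action 1 moves right (reward 1 at the end), action 0 stays.
def step_fn(s, a):
    if a == 1:
        return (s + 1, 1.0 if s + 1 == 2 else 0.0, s + 1 == 2)
    return (s, 0.0, False)

Q = sarsa(n_states=3, n_actions=2, step_fn=step_fn)
```

After training, the learned values prefer "move right" in every non-terminal state, which is the optimal policy on this chain.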
-
First, thanks for the great collection of code and articles. The articles were very useful for understanding DQN and implementing it.
However, my code is very bad at learning. I am not sure what …