-
In the "main_torch_dqn_lunar_lander_2020.py" file, this line:
--> self.state_memory[index] = state
raises:
"ValueError: setting an array element with a sequence. The requested array would exceed the maximum nu…
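This error usually means the object being written into the preallocated buffer is not a flat array of the expected shape. One common cause with newer Gym versions is that `env.reset()` returns an `(observation, info)` tuple rather than a bare observation. A minimal sketch of the failure mode and fix, with illustrative shapes (not the script's actual code):

```python
import numpy as np

# Preallocated replay buffer for 4-dimensional states (sizes are
# illustrative; the actual script's shapes may differ).
state_memory = np.zeros((100, 4), dtype=np.float32)

# In Gym >= 0.26, env.reset() returns (observation, info); storing
# that tuple directly raises "setting an array element with a
# sequence". Unpack it first so only the observation is stored.
reset_result = (np.array([0.1, 0.0, -0.2, 0.0], dtype=np.float32), {})
obs, _info = reset_result
state_memory[0] = np.asarray(obs, dtype=np.float32)
```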
-
Use the `relu` function instead.
https://github.com/FELICES-David/DQN_Cartpole/blob/5c318fdbf002bc23a199b0782539a45cb3d49c6c/cartpole-DQN_v2.py#L62
You don't need to cast to float64. This will re…
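For context, a sketch of the suggestion: ReLU on the hidden layers, a linear output layer (Q-values are unbounded, so no output activation), and float32 throughout, since PyTorch and TensorFlow default to float32 and a float64 cast only costs memory and speed. Layer sizes here are illustrative, not the linked repo's:

```python
import torch
import torch.nn as nn

# Illustrative Q-network: ReLU on hidden layers, linear output
# layer producing one unbounded Q-value per action. Inputs stay
# float32, PyTorch's default dtype.
q_net = nn.Sequential(
    nn.Linear(4, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

state = torch.tensor([0.1, 0.0, -0.2, 0.0])  # float32 by default
q_values = q_net(state)                      # shape (2,)
```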
-
I see this TODO in the code for this file:
https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/dqn_agent.py
```
# TODO(oars): Get DQN working with more than one dim in the…
-
List of things to be added:
- [x] Normalization of inputs
- [x] Should the DQN's output be a softmax?
- [x] Check sizes of the network. I would say that the first layer is too small
- [x] Would be …
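On the normalization item, a running-statistics sketch using Welford's algorithm (class and method names are illustrative, not from this repo). On the softmax item: a standard DQN's output layer is usually linear rather than softmax, since Q-values are unbounded regression targets.

```python
import numpy as np

# Illustrative running input normalizer (Welford's online algorithm):
# tracks per-feature mean and variance as observations stream in.
class RunningNormalizer:
    def __init__(self, size):
        self.count = 0
        self.mean = np.zeros(size)
        self.m2 = np.zeros(size)  # sum of squared deviations

    def update(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    def normalize(self, x):
        std = np.sqrt(self.m2 / max(self.count - 1, 1)) + 1e-8
        return (x - self.mean) / std
```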
-
Currently, DQN modules and losses want to know
```
action_space (str, optional): The action space to be considered.
Must be one of
``"one-hot"``, ``"mult_one_hot"``, ``"b…
-
I changed the parameter in examples/dqn.py to this and I get an error:
```
def main():
env_name = 'CartPole-v1'
# env_name = 'PongNoFrameskip-v4'
use_prioritization = True
use_…
jt70 updated 3 months ago
-
I noticed that the instructor uses self.eval_net.forward(input) when training the DQN. Why not use self.eval_net(input) here?
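The short answer to the question above: `self.eval_net(input)` goes through `nn.Module.__call__`, which runs any registered hooks and then dispatches to `forward()`; calling `forward()` directly skips the hook machinery. With no hooks registered the outputs are identical, but `module(input)` is the recommended form:

```python
import torch
import torch.nn as nn

net = nn.Linear(4, 2)
x = torch.ones(4)

# net(x) routes through nn.Module.__call__ (runs hooks, then
# forward); net.forward(x) bypasses hooks. Without hooks the
# results match exactly.
out_call = net(x)
out_forward = net.forward(x)
```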
-
There might be some shape-related errors, or we're missing something. Either that, or the hyperparameters need tuning.
-
Whenever I try to run the examples, I get the following error:
`No module named 'dqn'`
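A common cause is running the example from outside the repository root, so Python cannot find the `dqn` package. One workaround sketch (the path handling is illustrative; if the repo ships a setup file, `pip install -e .` is the usual fix):

```python
import sys
from pathlib import Path

# Illustrative workaround: prepend the repository root (replace
# Path.cwd() with the actual checkout location) so `import dqn`
# resolves when running examples from a subdirectory.
repo_root = Path.cwd()
sys.path.insert(0, str(repo_root))
```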
-
Hi, I'm using caffe-dqn as my Caffe build, and when I run `make`, I get this:
```
CMakeFiles/dqn.dir/dqn_main.cpp.o: In function `PlayOneEpisode(ALEInterface&, dqn::DQN&, double, bool)':
dqn_main.cpp:(.text+…