-
When running an agent-env-seed combination, whether via the Docker image or plain Python, no results are written to the `results.csv` output file. I let the code run overnight on my cluste…
-
Hi, I was thinking of incorporating the action (in addition to state) as a state-action pair input into the [rainbow dqn](https://nbviewer.jupyter.org/github/Curt-Park/rainbow-is-all-you-need/blob/mas…
-
Hi, I'm looking through the [code](https://github.com/medipixel/rl_algorithms/tree/master/rl_algorithms/dqn) for Rainbow DQN and didn't find any code which 'turns off' the noisy layer during test perio…
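One common convention (e.g., gating on a `training` flag the way PyTorch's `model.eval()` does) is to skip noise sampling at test time, so the layer falls back to its learned mean weights. A minimal NumPy sketch of the idea; the class and attribute names here are hypothetical, not the repo's actual API:

```python
import numpy as np

class NoisyLinear:
    """Hypothetical sketch of a NoisyNet linear layer. When `training`
    is False, noise sampling is skipped and the layer is deterministic."""

    def __init__(self, in_features, out_features, sigma_init=0.5, seed=0):
        rng = np.random.default_rng(seed)
        bound = 1.0 / np.sqrt(in_features)
        # Learnable means and noise scales (sigma), as in the NoisyNets paper.
        self.w_mu = rng.uniform(-bound, bound, (out_features, in_features))
        self.w_sigma = np.full((out_features, in_features), sigma_init * bound)
        self.b_mu = rng.uniform(-bound, bound, out_features)
        self.b_sigma = np.full(out_features, sigma_init * bound)
        self.rng = rng
        self.training = True  # analogous to model.train() / model.eval()

    def forward(self, x):
        if self.training:
            # Sample fresh Gaussian noise for this forward pass.
            w = self.w_mu + self.w_sigma * self.rng.standard_normal(self.w_mu.shape)
            b = self.b_mu + self.b_sigma * self.rng.standard_normal(self.b_mu.shape)
        else:
            # Noise "turned off": act w.r.t. the mean parameters only.
            w, b = self.w_mu, self.b_mu
        return x @ w.T + b

layer = NoisyLinear(4, 2)
layer.training = False
out1 = layer.forward(np.ones(4))
out2 = layer.forward(np.ones(4))
# With noise off, repeated forward passes are deterministic.
assert np.allclose(out1, out2)
```

With the flag left at `True`, each forward pass samples new noise, which is the paper's source of exploration during training.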
-
Hello,
Thanks for this amazing work.
I think there is a bug in the Rainbow algorithm, as the exploration is 0!
-
**tl;dr**: an error occurs in [08.rainbow.ipynb](https://colab.research.google.com/github/Curt-Park/rainbow-is-all-you-need/blob/master/08.rainbow.ipynb) while training to 100,000 steps (sometimes < 2…
-
I created a custom layer, which looks as follows:
```
class NoisyLayer(keras.layers.Layer):
    def __init__(self, in_shape=(1, 2592), out_units=256, activation=tf.identity):
        super(Noi…
-
Hi cxxgtxy,
Is there any NoisyNet implementation in the Rainbow baseline?
Do you have any recommended implementation of efficient NoisyNet?
Thank you for your help.
-
A new general method for exploration from DeepMind.
https://arxiv.org/pdf/1706.10295.pdf
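The paper's factorised-Gaussian variant is cheap enough for large layers: instead of drawing one noise sample per weight, it draws one vector per input and one per output and takes their outer product, with the scaling function f(x) = sgn(x)·√|x|. A small NumPy sketch of that construction (function names are mine, not the paper's code):

```python
import numpy as np

def f(x):
    # Scaling function from the paper: f(x) = sgn(x) * sqrt(|x|).
    return np.sign(x) * np.sqrt(np.abs(x))

def factorised_noise(in_features, out_features, rng):
    """Factorised Gaussian noise: draw in + out samples instead of
    in * out, then form the weight noise as an outer product."""
    eps_in = f(rng.standard_normal(in_features))
    eps_out = f(rng.standard_normal(out_features))
    w_eps = np.outer(eps_out, eps_in)  # noise for the weight matrix
    b_eps = eps_out                    # noise for the bias
    return w_eps, b_eps

rng = np.random.default_rng(0)
w_eps, b_eps = factorised_noise(3, 2, rng)
print(w_eps.shape, b_eps.shape)  # (2, 3) (2,)
```

The resulting weight-noise matrix is rank one by construction, which is the trade-off that buys the reduced sample count.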
-
The introduction of *NoisyNets* appears to add significant exploratory capability to DQN & A3C, in some cases making progress on tasks that had otherwise shown little advancement.
I wasn't sure…
-
There is a parameter ```--evaluation-episodes```, but in the current implementation, since we always act greedily, all the episodes are going to be exactly the same. I think that to get a better t…
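One way to make greedy evaluation episodes differ is to resample the noisy layers' epsilons at the start of each episode, so each episode is greedy with respect to a different noise draw. A toy sketch of that loop; `DummyAgent`, `reset_noise`, and `evaluate` are hypothetical stand-ins, not the repo's actual API:

```python
import numpy as np

class DummyAgent:
    """Stand-in for a NoisyNet agent: its greedy action depends on the
    currently sampled noise."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.noise = np.zeros(4)

    def reset_noise(self):
        # Resample the noisy-layer epsilons (NoisyNet's source of variation).
        self.noise = self.rng.standard_normal(4)

    def act(self, state):
        # Greedy w.r.t. the (noisy) value estimates.
        return int(np.argmax(state + self.noise))

def evaluate(agent, n_episodes=5):
    actions = []
    for _ in range(n_episodes):
        agent.reset_noise()  # fresh noise per episode -> varied greedy policies
        state = np.zeros(4)  # placeholder for env.reset()
        actions.append(agent.act(state))
    return actions

print(evaluate(DummyAgent()))
```

Without the `reset_noise()` call every episode would replay the same greedy trajectory, which is exactly the concern raised above.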