-
Hello, when trying the DeepQ learning method there is no `deepq.py`, so an error appears when the launcher is executed and it does not work.
Could you please upload that file?
Thanks
-
Are those environments compatible with the OpenAI baselines implementation?
At first sight, it looks like the agents in `openai/baselines` don't support environments whose observations are lists.
For e…
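A possible workaround (a sketch under my own naming, not part of `openai/baselines`) is to flatten the list observation into a single vector before passing it to the agent:

```python
import numpy as np

def flatten_obs(obs_list):
    """Concatenate a list of arrays/scalars into one flat float32 vector."""
    parts = [np.asarray(o, dtype=np.float32).ravel() for o in obs_list]
    return np.concatenate(parts)

# Example: an observation made of a position vector and a scalar heading.
obs = [np.array([1.0, 2.0]), 0.5]
flat = flatten_obs(obs)
print(flat.shape)  # (3,)
```

The agent then only ever sees a fixed-size `Box`-style vector, which is what the baselines agents expect.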
-
Hello,
I would like to know what you think about having some standalone implementations as functions that take the environment and other parameters and return the trained policy.
Here is an examp…
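Something along those lines, sketched with tabular Q-learning as a stand-in (the environment interface, names, and toy env here are my own assumptions, not an existing API): the function takes the environment plus hyperparameters and returns the trained greedy policy as a plain callable.

```python
import random
from collections import defaultdict

def train_q_learning(env, episodes=200, alpha=0.5, gamma=0.99, eps=0.1):
    """Take an environment and hyperparameters, return the trained policy."""
    q = defaultdict(float)  # maps (state, action) -> estimated value

    def greedy(state):
        return max(range(env.n_actions), key=lambda a: q[(state, a)])

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = random.randrange(env.n_actions) if random.random() < eps else greedy(s)
            s2, r, done = env.step(a)
            best_next = max(q[(s2, a2)] for a2 in range(env.n_actions))
            q[(s, a)] += alpha * (r + gamma * best_next * (not done) - q[(s, a)])
            s = s2
    return greedy  # the trained policy: state -> best action

# Toy deterministic chain environment: reach state 3 (action 1 = right) for reward 1.
class ChainEnv:
    n_actions = 2
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(self.s + 1, 3) if a == 1 else max(self.s - 1, 0)
        done = self.s == 3
        return self.s, float(done), done

random.seed(0)
policy = train_q_learning(ChainEnv())
print(policy(1), policy(2))  # learned policy moves right toward the goal
```

The appeal of this shape is that the whole training loop is one self-contained function, and the returned policy can be evaluated or saved without dragging the trainer along.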
-
Hello,
What can we do in that case?
Thank you
![Screenshot from 2020-04-25 15-59-38](https://user-images.githubusercontent.com/50038739/80274596-2350a100-870e-11ea-9f49-6d7df4870b01.png)
-
Hello,
Not sure if this repo is active, but I am interested in using your environment for a research project. I have built my own simple DeepQ network to train on the ATC environment. I got it wo…
-
I haven't been able to reproduce the results of the Breakout benchmark with Double DQN when using hyperparameter values similar to the ones presented in the original paper. After more than 20M obser…
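For what it's worth, the Double DQN target itself is easy to sanity-check in isolation. A small numpy sketch (function and variable names are mine, not from `baselines`) that selects the next action with the online network and evaluates it with the target network:

```python
import numpy as np

def double_dqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.99):
    """Double DQN: action selection by the online net, evaluation by the target net."""
    best_actions = np.argmax(q_online_next, axis=1)           # argmax_a Q_online(s', a)
    evaluated = q_target_next[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated        # r + gamma * Q_target(s', a*)

# Tiny batch of 2 transitions with 3 actions; the second transition is terminal.
r = np.array([1.0, 0.0])
d = np.array([0.0, 1.0])
q_on = np.array([[0.1, 0.9, 0.2], [0.5, 0.4, 0.3]])
q_tg = np.array([[0.2, 0.7, 0.1], [0.6, 0.2, 0.2]])
print(double_dqn_targets(r, d, q_on, q_tg))
```

If the targets look right in a unit test like this, the gap is more likely in the hyperparameters (target-network update period, replay size, epsilon schedule) than in the update rule.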
-
I can't get the train_mineral_shards.py example started. I'm getting this error:
```
$ python train_mineral_shards.py
Traceback (most recent call last):
  File "train_mineral_shards.py", line 14, in
…
```
-
Hi guys,
Let me first say that I am quite new to TensorFlow and particularly TensorBoard. I just started watching some videos and tutorials and I found the possibility to tune the hyperparameters…
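Setting TensorBoard aside for a moment, hyperparameter tuning is usually just a sweep over a grid of values with one logged run per combination. A framework-agnostic sketch (the `train` function and parameter names are placeholders of my own, not from any tutorial):

```python
from itertools import product

# Hypothetical search space (names are placeholders).
grid = {
    "lr": [1e-3, 1e-4],
    "batch_size": [32, 64],
}

def train(lr, batch_size):
    """Stand-in for a real training run; returns a fake score."""
    return 1.0 / (lr * batch_size)

results = {}
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    # With TensorBoard, each combination would get its own log directory,
    # e.g. f"logs/lr={params['lr']}-bs={params['batch_size']}".
    results[tuple(values)] = train(**params)

best = max(results, key=results.get)
print(best)  # the combination with the highest score
```

TensorBoard's job is then only to visualize the per-run logs side by side; the sweep loop itself stays this simple.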
-
In DQN's [README](https://github.com/openai/baselines/tree/master/baselines/deepq), it gives a way to download the pretrained model:
```
python -m baselines.deepq.experiments.atari.download_model
…
```
-
The default hyperparameters of `baselines/baselines/deepq/experiments/run_atari.py`, which is presumably the script we should be using for DQN-based models, fail to gain any noticeable reward for both…
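One of the hyperparameters that typically matters most here is the exploration schedule. As an illustration only (my own minimal sketch, not the `baselines` implementation), linear epsilon annealing looks like:

```python
def linear_epsilon(step, total_steps, eps_start=1.0, eps_final=0.1):
    """Linearly anneal exploration epsilon from eps_start to eps_final,
    then hold it constant at eps_final."""
    frac = min(step / total_steps, 1.0)
    return eps_start + frac * (eps_final - eps_start)

print(linear_epsilon(0, 1_000_000))          # starts fully exploratory
print(linear_epsilon(500_000, 1_000_000))    # halfway through the anneal
print(linear_epsilon(2_000_000, 1_000_000))  # clamped at the final value
```

If the anneal window is too short relative to the total number of timesteps, the agent stops exploring long before it has seen any reward, which would be consistent with the flat learning curves described above.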