Closed by n17s 3 years ago
Yes, it is the same as the original gin config in Dopamine, and it is also present in this repo. The agents in LoggedDQNAgent in this repo were used to collect the data. You can use the run_experiment.py script to launch these experiments.
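For reference, a Dopamine-style gin config for online DQN on Atari looks roughly like the sketch below. The specific hyperparameter values here are my assumptions based on Dopamine's standard published DQN configs, not a copy of the file used for this repo, so verify them against the actual gin file before launching anything:

```gin
# Sketch of a Dopamine-style DQN gin config. All values are assumptions
# drawn from Dopamine's standard DQN settings; check against the real file.
import dopamine.agents.dqn.dqn_agent
import dopamine.discrete_domains.atari_lib
import dopamine.discrete_domains.run_experiment

DQNAgent.gamma = 0.99
DQNAgent.update_horizon = 1
DQNAgent.min_replay_history = 20000    # assumed; Nature DQN uses 50000
DQNAgent.update_period = 4
DQNAgent.target_update_period = 8000   # assumed; Nature DQN uses 10000
DQNAgent.epsilon_train = 0.01

atari_lib.create_atari_environment.game_name = 'Pong'  # placeholder game
Runner.num_iterations = 200
Runner.training_steps = 250000  # 200 x 250k = 50M agent steps total
```

The important question for exact replication is which of these knobs (notably min_replay_history, target_update_period, and the epsilon schedule) differ from Dopamine's defaults, which is why the original gin file is the ground truth here.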
Regarding the online training replication: Acme may also have a faster implementation for training online agents, though I am not sure how much faster it would be, since the bottleneck is online data collection from the Atari environments. RL Unplugged creates a TFRecord version of the offline Atari dataset, so it only speeds up the offline part. That said, the logging experiments take about 4-5 days per game run for online Atari with Dopamine.
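As a rough sanity check on that 4-5 day figure: a standard Dopamine schedule is 200 iterations of 250k agent steps. The throughput number below is an assumed ballpark for online Atari with logging enabled, not a measured value:

```python
# Back-of-envelope check of the ~4-5 day estimate for one online Atari run.
iterations = 200                # standard Dopamine schedule (assumed)
steps_per_iteration = 250_000
total_steps = iterations * steps_per_iteration  # 50M agent steps

assumed_steps_per_sec = 130     # hypothetical throughput with logging enabled
seconds = total_steps / assumed_steps_per_sec
days = seconds / 86_400
print(f"{total_steps:,} steps at ~{assumed_steps_per_sec} steps/s "
      f"is roughly {days:.1f} days")
```

At that assumed throughput the run lands in the 4-5 day range, which is consistent with the estimate above; a faster or slower machine shifts it proportionally.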
This is very helpful! Thanks!
I'd like to retrain the online DQN agent in order to log some additional data during online training. The README says
However, this is not enough information to replicate the setup accurately. Could you share the gin file that was used? Is it the same as the one in Dopamine?
Also, is there a faster way to accurately replicate the online training and logging via the RL Unplugged project, or is it only the offline part that has been sped up?