google-research / batch_rl

Offline Reinforcement Learning (aka Batch Reinforcement Learning) on Atari 2600 games
https://offline-rl.github.io/
Apache License 2.0

Can I use TensorBoard to check the intermediate results? #7

Closed · weihongwei0586 · closed 4 years ago

weihongwei0586 commented 4 years ago

Hello, when I run the demo:

```
python -um batch_rl.fixed_replay.train \
  --base_dir=/tmp/batch_rl \
  --replay_dir=$DATA_DIR/Pong/1 \
  --gin_files='batch_rl/fixed_replay/configs/dqn.gin'
```

the training has already been running for 2 days and has not finished! I want to check the intermediate results using TensorBoard, but it does not seem to work.
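For reference, I am launching TensorBoard roughly like this (just a sketch of my command; the exact log path and port on my machine may differ, which could be part of the problem):

```
# Point TensorBoard at the experiment's base_dir; it scans the
# directory recursively for TF event files, assuming Dopamine writes
# its summaries somewhere under --base_dir (its default behaviour).
tensorboard --logdir=/tmp/batch_rl --port=6006
```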

agarwl commented 4 years ago

I am not sure why TensorBoard wouldn't work .. are you able to run TensorBoard with online Dopamine agents? Also, you can change some of the parameters in the gin file to run the agents much faster for debugging .. I will take a look at this by next week. Training offline Atari agents does take about 2 days (since Dopamine isn't optimized, and online agents take 4-5 days!), but you can simply use a smaller replay buffer (the 1% and 10% experiments in the paper), which runs much faster than the full-dataset experiment.
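For example, here is a rough sketch of the kind of quick-debug run I mean, assuming the training script accepts Dopamine-style `--gin_bindings` overrides and the runner reuses Dopamine's standard `Runner` parameters (the binding names below are assumptions; verify them against `batch_rl/fixed_replay/configs/dqn.gin` before using them):

```
# Hypothetical debug run: shrink the number of iterations and the
# training steps per iteration so the experiment finishes in hours
# rather than days. Binding names are assumptions; check dqn.gin.
python -um batch_rl.fixed_replay.train \
  --base_dir=/tmp/batch_rl_debug \
  --replay_dir=$DATA_DIR/Pong/1 \
  --gin_files='batch_rl/fixed_replay/configs/dqn.gin' \
  --gin_bindings='FixedReplayRunner.num_iterations=10' \
  --gin_bindings='Runner.training_steps=10000'
```

If the script does not expose `--gin_bindings`, editing the same values directly in the gin file has the same effect.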

As an aside, a much faster implementation of offline agents with DeepMind's Acme is now available (as part of the RL Unplugged release, which also contains the DQN replay dataset): https://github.com/deepmind/deepmind-research/blob/master/rl_unplugged/atari_dqn.ipynb

weihongwei0586 commented 4 years ago

Thank you very much!

weihongwei0586 commented 4 years ago


Thank you for your help! I have figured out the problem with TensorBoard: the path and port were wrong when I launched it. Now I can see the intermediate results!

agarwl commented 4 years ago

Great that you figured out the problem -- closing this issue now :)