Closed bingykang closed 3 years ago
According to the script https://github.com/google-research/seed_rl/blob/master/gcp/train_atari.sh#L32-L39, it requests two NVIDIA_TESLA_P100 GPUs.
But in https://github.com/google-research/seed_rl/blob/master/common/utils.py#L99, when no TPU devices are available, it falls back to OneDeviceStrategy for learning.
So is this gcp/train_atari.sh script effectively the same as running SEED RL locally on a single GPU?
I believe you're right about that. See, for example, the comment just before that line:
One could arguably use the two GPUs efficiently instead of wasting one.
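For reference, here is a minimal sketch of the kind of fallback logic being discussed: when no TPU is reachable, training falls back to a OneDeviceStrategy pinned to a single device, so a second attached GPU sits idle. The function names below (`create_distribution_strategy`, `create_multi_gpu_strategy`) are hypothetical and not the actual seed_rl API; this is an illustration under those assumptions, not the repository's implementation.

```python
import tensorflow as tf

def create_distribution_strategy(try_tpu=False):
    """Hypothetical sketch of TPU-or-single-device strategy selection.

    Mirrors the behavior described in the thread: with no TPU present,
    only one device is used for learning, even if two GPUs are attached.
    """
    if try_tpu:
        try:
            resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
            tf.config.experimental_connect_to_cluster(resolver)
            tf.tpu.experimental.initialize_tpu_system(resolver)
            return tf.distribute.TPUStrategy(resolver)
        except ValueError:
            pass  # No TPU found; fall through to the single-device path.
    gpus = tf.config.list_physical_devices('GPU')
    device = '/gpu:0' if gpus else '/cpu:0'
    # Only one device is used here, regardless of how many GPUs exist.
    return tf.distribute.OneDeviceStrategy(device)

def create_multi_gpu_strategy():
    """Hypothetical alternative: replicate the learner across all visible
    GPUs with MirroredStrategy, so both P100s would do useful work."""
    return tf.distribute.MirroredStrategy()
```

Swapping in `MirroredStrategy` is the standard TensorFlow way to use both GPUs for synchronous data-parallel training, which is what "use the two GPUs efficiently instead of wasting one" would amount to.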