Closed: hsezhiyan closed this issue 4 years ago.
Hi @hsezhiyan,
TPU training is currently not supported. If you implement it, please send us a pull request.
Hi @hsezhiyan,
To close this bug:
(1) If you want to speed up training, this project is much more likely to benefit from simpler changes than a switch to TPUs. For instance, running more actors on the same machine than we did, or running actors across many machines in a distributed fashion, is likely to provide good training speedups.
(2) Fully harnessing Cloud TPUs for RL training is considerably more challenging than (1). See https://github.com/google-research/seed_rl for a distributed RL framework that will soon run on Cloud TPUs.
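As a rough illustration of point (1), here is a minimal sketch of running several actor processes on one machine and collecting their transitions through a shared queue. The `actor` loop and the dummy transitions are hypothetical stand-ins, not this project's API; in the real codebase the actor would step an environment with the current policy.

```python
import multiprocessing as mp


def actor(actor_id, steps, queue):
    # Hypothetical actor loop: a real actor would run the environment and
    # policy; here each step just emits a dummy (actor_id, step) transition.
    for step in range(steps):
        queue.put((actor_id, step))


def collect(num_actors=4, steps_per_actor=10):
    # Launch several actor processes and gather their transitions from a
    # shared queue -- the "more actors on the same machine" speedup.
    queue = mp.Queue()
    procs = [
        mp.Process(target=actor, args=(i, steps_per_actor, queue))
        for i in range(num_actors)
    ]
    for p in procs:
        p.start()
    # Exactly num_actors * steps_per_actor transitions will arrive.
    transitions = [queue.get() for _ in range(num_actors * steps_per_actor)]
    for p in procs:
        p.join()
    return transitions


if __name__ == "__main__":
    batch = collect()
    print(len(batch))
```

The same pattern extends across machines by replacing the in-process queue with a network transport, which is essentially what frameworks like SEED RL do at scale.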
Does this codebase currently support training on a TPU? If so, how would I train on a TPU?