Kaixhin / Rainbow

Rainbow: Combining Improvements in Deep Reinforcement Learning
MIT License

The problem about training with GPU #78

Closed Hugh-Cai closed 4 years ago

Hugh-Cai commented 4 years ago

Hi, Kai! I've run into a problem: when I train the Rainbow agent, my computer's CPU utilization is very high even though I use CUDA for training. My computer configuration is an i7-9700K and an RTX 3080. How can I speed up the training process? How can I use the GPU to speed up training of the network without being limited by CPU frequency? Another problem is that I tried to save the model and continue training, but it doesn't work. Even when the saved model and memory are loaded, training still starts from scratch.

Kaixhin commented 4 years ago

There will still be a lot of CPU utilisation for various bits of code - for example, the environment does not run on GPU, and the replay memory is generally too large to be stored on GPU.
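To illustrate the split described above, here is a minimal sketch (not code from this repo; all names are hypothetical) of the usual PyTorch pattern: the replay memory stays in CPU RAM as NumPy arrays, and only each sampled minibatch is copied to the GPU, so some per-step CPU work is unavoidable.

```python
import numpy as np
import torch

# Fall back to CPU when no GPU is present, so the sketch runs anywhere.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

capacity, state_dim, batch_size = 10000, 8, 32
# The replay memory lives in ordinary CPU RAM (here, a NumPy array) --
# it is generally far too large to keep on the GPU.
states = np.zeros((capacity, state_dim), dtype=np.float32)

def sample_batch():
    # Sampling and indexing happen on the CPU...
    idxs = np.random.randint(0, capacity, size=batch_size)
    batch = torch.from_numpy(states[idxs])  # fancy indexing copies, so this tensor is safe to move
    if device.type == 'cuda':
        batch = batch.pin_memory()          # page-locked memory speeds up the host-to-device copy
    # ...and only the minibatch is transferred to the GPU each step.
    return batch.to(device, non_blocking=True)

batch = sample_batch()
print(tuple(batch.shape))
```

The environment step itself (e.g. an Atari emulator) is also pure CPU work, so high CPU utilisation alongside GPU training is expected rather than a misconfiguration.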

The codebase provides functionality for saving and loading models and memories (e.g. for use in offline RL), but does not provide "resume training" functionality. I won't be building this but PRs are welcome.
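For anyone considering such a PR, a resume feature usually amounts to checkpointing more than just the network weights. This is a hypothetical sketch (names and structure are not from this repo) of bundling the model, optimiser state, and step counter so a later run can continue instead of starting from scratch:

```python
import torch
import torch.nn as nn

net = nn.Linear(4, 2)  # stand-in for the Rainbow network
optimiser = torch.optim.Adam(net.parameters(), lr=1e-4)
step = 1234            # training steps completed so far

# Save everything needed to resume, not just the weights.
torch.save({'net': net.state_dict(),
            'optimiser': optimiser.state_dict(),
            'step': step}, 'checkpoint.pth')

# On restart: restore all three, then run the training loop from start_step.
ckpt = torch.load('checkpoint.pth')
net.load_state_dict(ckpt['net'])
optimiser.load_state_dict(ckpt['optimiser'])
start_step = ckpt['step']
print(start_step)
```

Restoring the optimiser state matters for Adam in particular, since its per-parameter moment estimates otherwise reset and perturb training; a full resume would also need to reload the replay memory, which this repo's existing save/load functionality already covers.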