huangwl18 / modular-rl

[ICML 2020] PyTorch Code for "One Policy to Control Them All: Shared Modular Policies for Agent-Agnostic Control"
https://huangwl18.github.io/modular-rl/

Multi-CPU parallel training #4

Open · Taylor-Liu opened this issue 4 years ago

Taylor-Liu commented 4 years ago

Hi, Wenlong. Thanks for sharing your code.

When I ran your code, I found that only one CPU was used for training, so training was a bit slow. Can your code train with multiple CPUs in parallel? I couldn't find a corresponding option in the code configuration.

huangwl18 commented 4 years ago

Thanks for the interest in the paper! The code is not configured for multi-CPU training. Only vectorized environments (from OpenAI baselines) and torchfold (dynamic batching) are currently used to speed up training.
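
For reference, here is a minimal sketch of how vectorized environments from OpenAI baselines step several copies of a Gym environment in parallel worker processes. The environment id (`Walker2d-v2`), the number of copies, and the random placeholder policy are illustrative assumptions, not taken from the modular-rl code.

```python
import gym
import numpy as np
from baselines.common.vec_env.subproc_vec_env import SubprocVecEnv


def make_env(env_id, seed):
    """Return a thunk that builds one seeded environment instance."""
    def _thunk():
        env = gym.make(env_id)
        env.seed(seed)
        return env
    return _thunk


if __name__ == "__main__":
    num_envs = 8  # one worker process per environment copy
    env_fns = [make_env("Walker2d-v2", seed=i) for i in range(num_envs)]
    vec_env = SubprocVecEnv(env_fns)

    obs = vec_env.reset()  # stacked observations, shape (num_envs, obs_dim)
    for _ in range(100):
        # Placeholder random actions; a real agent would batch `obs`
        # through its policy network here instead.
        actions = np.stack([vec_env.action_space.sample()
                            for _ in range(num_envs)])
        obs, rewards, dones, infos = vec_env.step(actions)
    vec_env.close()
```

Each environment copy runs in its own subprocess, so stepping them is spread across CPU cores even though the policy forward/backward passes themselves stay in the main process.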