Closed jaekyeom closed 5 years ago
Thanks for the PR. We actually did a similar fix internally (which we have not yet pushed to this GitHub repo). In your PR, you are effectively changing the default behavior when the flag is not set. Before the PR, the code was effectively defaulting to ThreadedVecEnv; after it, the default is SubprocVecEnv. I guess you are doing this to make training go faster. Do you have some numbers to share that show this?

No problem.
On my specific machine, yes. Given a 20-core, 60GB-memory, 2-GPU resource allocation, switching from ThreadedVecEnv to SubprocVecEnv increased the FPS reported by the openai/baselines code from 90 to 130. Also, the deepmind_lab/deepmind/level_generation/compile_map.sh calls now run in parallel, which is a huge win in my opinion.
But I wouldn't mind discarding this PR and publishing your internal fix, hopefully along with other internal changes :)

Thanks for the analysis. I did it on my end as well and got similar results (FPS increases from ~120 to ~160, and map compilation is indeed done in parallel). I pushed our internal fix to the handling of the flag, which does not change the effective default behavior. Maybe you could modify your PR to just change the default value of the flag on the current HEAD to "SubprocVecEnv"?
Except for this, we don't have other internal changes.

Thanks for the info. I modified this PR as you suggested.

Can you sign the CLA as explained in CONTRIBUTING.md, so that I can merge your PR? Thanks!

Thanks, I didn't know my GitHub username was not in the CLA contact info. I updated my CLA.

Merged.
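For readers unfamiliar with the distinction, the reason SubprocVecEnv helps here is that each environment lives in its own OS process, so blocking work inside an env (like spawning compile_map.sh) runs in parallel instead of serializing behind Python's GIL as with ThreadedVecEnv. Below is a minimal, self-contained sketch of the subprocess-based pattern; all names (DummyEnv, MiniSubprocVecEnv) are hypothetical and this is not the baselines API, just an illustration of the mechanism.

```python
import multiprocessing as mp


class DummyEnv:
    """Toy stand-in environment (hypothetical; the real code wraps DeepMind Lab)."""

    def __init__(self, seed):
        self.seed = seed
        self.t = 0

    def step(self, action):
        self.t += 1
        # Toy "observation"; a real env would also do heavy native work here.
        return self.seed + self.t + action


def _worker(conn, seed):
    # Each worker process owns its own env instance, so env.step() calls
    # (and any subprocesses they launch, e.g. map compilation) run in parallel.
    env = DummyEnv(seed)
    while True:
        cmd, data = conn.recv()
        if cmd == "step":
            conn.send(env.step(data))
        elif cmd == "close":
            conn.close()
            break


class MiniSubprocVecEnv:
    """Minimal sketch of a subprocess-based vectorized env (not the baselines API)."""

    def __init__(self, seeds):
        self.conns, self.procs = [], []
        for seed in seeds:
            parent, child = mp.Pipe()
            p = mp.Process(target=_worker, args=(child, seed), daemon=True)
            p.start()
            self.conns.append(parent)
            self.procs.append(p)

    def step(self, actions):
        # Fan out all step commands first, then collect: the envs advance
        # concurrently instead of one after another.
        for conn, action in zip(self.conns, actions):
            conn.send(("step", action))
        return [conn.recv() for conn in self.conns]

    def close(self):
        for conn in self.conns:
            conn.send(("close", None))
        for p in self.procs:
            p.join()
```

A ThreadedVecEnv analogue would run the same loop with threads in one process, which is fine for I/O-bound envs but serializes Python-level and fork-heavy work; that difference is what the FPS numbers in this thread reflect.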