berkeleydeeprlcourse / homework_fall2022

Assignments for Berkeley CS 285: Deep Reinforcement Learning (Fall 2022)

Vectorizing env #2

Closed bri25yu closed 2 years ago

bri25yu commented 2 years ago

With the LunarLander and HalfCheetah envs requiring batch sizes of >10k per iteration for convergence, these runs take quite a long time. Gym supports stepping multiple envs in parallel (via multiprocessing) through its VectorEnv base class. Here's my proposal for a vectorized-environment implementation to speed up batch collection.
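For reference, here is a minimal sketch of what parallel rollout collection looks like with Gym's vector API. The env name, number of envs, step budget, and the exact reset/step signatures are assumptions on my part (they vary across gym versions), so treat this as illustrative rather than the homework code itself:

import gym

# Minimal sketch: 10 copies of CartPole stepped in parallel worker processes.
# The env name and step count are placeholders, not the homework settings.
num_envs = 10
envs = gym.vector.AsyncVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(num_envs)]
)

obs = envs.reset()  # newer gym versions return (obs, info) instead
total_steps = 0
while total_steps < 10000:
    # A real agent would sample actions from its policy; random actions suffice here.
    actions = envs.action_space.sample()  # batched action space: one action per env
    step_out = envs.step(actions)  # 4- or 5-tuple depending on gym version
    obs = step_out[0]
    total_steps += num_envs  # each vectorized step advances every env by one step
envs.close()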

I would make a PR, but the code in this repo contains many unfilled blanks (TODOs), some of which would need to be implemented in order to make a proper PR. Instead, I'm opening this issue and linking the relevant commits here.

An example of a test you can run is here. We run the CartPole environment for 10 iterations with a batch size of 10000, comparing the single-env case against the vectorized case with 10 parallel envs.
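Something along these lines, with random actions standing in for the learned policy, reproduces the flavor of that comparison. The env name, step budget, and the handling of the step return are assumptions, not the exact harness used for the numbers below:

import time
import gym

def collect_serial(env_name, num_steps):
    # Step a single env for num_steps transitions, resetting whenever it terminates.
    env = gym.make(env_name)
    env.reset()
    for _ in range(num_steps):
        out = env.step(env.action_space.sample())
        done = out[2] if len(out) == 4 else (out[2] or out[3])  # old vs. new gym step API
        if done:
            env.reset()
    env.close()

def collect_vectorized(env_name, num_steps, num_envs):
    # Step num_envs copies in parallel; vector envs auto-reset finished sub-envs.
    envs = gym.vector.AsyncVectorEnv(
        [lambda: gym.make(env_name) for _ in range(num_envs)]
    )
    envs.reset()
    for _ in range(num_steps // num_envs):
        envs.step(envs.action_space.sample())
    envs.close()

for label, fn in [
    ("serial", lambda: collect_serial("CartPole-v1", 10000)),
    ("vectorized x10", lambda: collect_vectorized("CartPole-v1", 10000, 10)),
]:
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")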

Obviously the staff implementation and my implementation differ, but here are my outputs for the test above:

# Without vectorization
real    1m17.553s
user    1m16.253s
sys 0m1.270s

# With vectorization, # of envs = 10
real    0m13.258s
user    0m13.899s
sys 0m2.867s

We achieve a speedup of roughly 6x. Say my LunarLander experiments take 2 hours to run; this would in theory reduce the same experiment to about 20 minutes, which is much more tractable, especially for students who typically only have access to CPUs or Colab GPUs.

bri25yu commented 2 years ago

Fixed an issue via https://github.com/bri25yu/homework_fall2022/commit/acaf93b64e70798fddacd062457c80e0958c3c71