PyTorch implementation of Advantage Actor-Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR), and Generative Adversarial Imitation Learning (GAIL).
In these two lines, https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/blob/62e8d5896db155839056deb0fe60e0c05db0bf16/a2c_ppo_acktr/envs.py#L234-L235
the source and destination slices of `self.stacked_obs` overlap, and PyTorch does not guarantee correct results for an in-place copy between overlapping slices: the shift can read elements that the same statement has already overwritten. The assignment should be

self.stacked_obs[:, :-self.shape_dim0] = self.stacked_obs[:, self.shape_dim0:].clone()

so that the source slice is materialized before it is written back into the buffer.
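A minimal standalone sketch of the fixed shift, not the repo's actual class: it assumes a frame-stacking buffer of shape `(num_envs, num_stack * shape_dim0)` like the one `stacked_obs` holds, with small hypothetical dimensions so the result is easy to check by eye. Dropping the oldest frame means shifting the buffer left by `shape_dim0` channels, and because the source and destination slices overlap, the right-hand side is snapshotted with `.clone()` first:

```python
import torch

# Hypothetical small dimensions for illustration only.
num_envs, num_stack, shape_dim0 = 2, 4, 1

# Buffer of stacked frames, one row per environment.
# Row 0 holds frames [0, 1, 2, 3]; row 1 holds [4, 5, 6, 7].
stacked_obs = torch.arange(
    num_envs * num_stack, dtype=torch.float32
).view(num_envs, num_stack * shape_dim0)

# Safe shift: .clone() materializes the source slice before the in-place
# assignment, so the copy cannot read channels that the same statement
# has already overwritten. Row 0 becomes [1, 2, 3, 3], ready for the
# newest frame to be written into the last shape_dim0 channels.
stacked_obs[:, :-shape_dim0] = stacked_obs[:, shape_dim0:].clone()
```

The same pattern applies for any `shape_dim0`; only the width of the overlapping region changes.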