Using PyTorch 1.5 on CUDA 10.2, this call in `memory.py` allocates memory and releases it immediately. This causes a CUDA out-of-memory error if you can't store double the number of samples that `memory_capacity` is set to. Changed to use `torch.as_strided` instead, which performs no allocation.
Also, due to improved bounds checking in `as_strided`, the change uncovered an error where the second dimension of `states_view` should be `num_steps - (self.history - 1)` instead of `num_steps`. The same applies to `frame_view` and `reward_view`.
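For illustration, sizing the window dimension with the old `num_steps` bound trips that check, since the trailing windows would index past the end of the buffer (again with hypothetical stand-in shapes):

```python
import torch

num_envs, num_steps, frame_dim, history = 2, 8, 3, 4
frames = torch.zeros(num_envs, num_steps, frame_dim)
sE, sT, sD = frames.stride()

try:
    # Old (wrong) second dimension: the last history - 1 windows would read
    # past the end of the storage, so as_strided rejects the view.
    frames.as_strided(
        size=(num_envs, num_steps, history, frame_dim),
        stride=(sE, sT, sT, sD),
    )
except RuntimeError as e:
    print(e)  # reports that the requested view is out of bounds for the storage
```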