I'm developing on a laptop, so my GPU is below spec.
I wonder if there is a way to prevent out-of-memory errors, even if that sacrifices results. Right now I'm more interested in running it, registering the output, and so on, than in getting the best results possible.
Error I'm getting is:
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 5.94 GiB total capacity; 3.90 GiB already allocated; 457.88 MiB free; 4.86 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try
The volume is 200x200x200, running without sequence.
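In case it helps frame the trade-off I'm willing to make: a quick back-of-the-envelope sketch (plain Python, assuming float32 voxels; the actual network will hold several activation buffers of roughly this shape) of how much downsampling the input volume would save. Halving each axis cuts per-volume memory by 8x, which might be enough to squeeze under ~6 GiB at the cost of resolution:

```python
# Rough memory estimate for one dense 3D volume in float32.
# Assumption: memory use scales with voxel count, so downsampling
# by 2 in each axis reduces it by a factor of 2^3 = 8.
def volume_bytes(side: int, dtype_bytes: int = 4) -> int:
    """Bytes needed for a cubic side x side x side volume."""
    return side ** 3 * dtype_bytes

full = volume_bytes(200)   # original 200^3 volume
half = volume_bytes(100)   # downsampled by 2 per axis

print(f"200^3 float32 volume: {full / 2**20:.1f} MiB")   # ~30.5 MiB
print(f"100^3 float32 volume: {half / 2**20:.1f} MiB")   # ~3.8 MiB
print(f"reduction factor: {full // half}x")              # 8x
```

This is only the raw tensor size; intermediate activations and gradients multiply it, which is why a single extra 1 GiB allocation tips a 6 GiB card over. Other options I'd consider trying (not verified on this codebase): wrapping inference in `torch.no_grad()`, and setting `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:...` as the truncated error message hints, to reduce fragmentation.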