truedat101 closed this issue 6 months ago
For those with smaller GPUs (under 16GB VRAM), try adjusting the config yml to use a smaller height and width. 512 is too ambitious; start at 64 and step up from there. In practice 64 is too small to be useful (the video is postage-stamp sized and missing observable detail), so 128 seems like a good starting point. A sketch of the override follows below.
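A minimal sketch of that override, assuming the config is plain YAML and that the dimensions live under `height`/`width` keys; the real key names and file path depend on the project's schema, so check before running:

```python
# Sketch: shrink the render size in the project's config yml.
# "config.yml", "height", and "width" are assumptions, not the project's
# confirmed schema -- adjust to match the actual file.
import yaml  # PyYAML

CONFIG_PATH = "config.yml"

with open(CONFIG_PATH) as f:
    cfg = yaml.safe_load(f)

# 512x512 OOMs on a 12GB card; 128x128 is a workable starting point.
cfg["height"] = 128
cfg["width"] = 128

with open(CONFIG_PATH, "w") as f:
    yaml.safe_dump(cfg, f)
```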
I'll close this. Anyone finding the project can review this issue and work out a fix for their setup. Ideally the README would include some sizing guidance (maybe it is already there and I missed it).
Running inference on Ubuntu 22.04 with an NVIDIA RTX 3080 (12GB), I'm getting:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.25 GiB. GPU 0 has a total capacity of 11.75 GiB of which 1.08 GiB is free. Including non-PyTorch memory, this process has 9.95 GiB memory in use. Of the allocated memory 9.51 GiB is allocated by PyTorch, and 139.25 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
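The message itself points at one knob. A minimal sketch of applying it, assuming inference is launched from a Python entry point (the variable can equally be exported in the shell); it must be set before the CUDA caching allocator initializes, so the easiest safe spot is before `torch` is imported:

```python
# Apply the allocator hint from the OOM message before torch loads.
import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch

# Sanity check: report free/total VRAM before running inference.
free_bytes, total_bytes = torch.cuda.mem_get_info()
print(f"free: {free_bytes / 2**30:.2f} GiB / total: {total_bytes / 2**30:.2f} GiB")
```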
Beyond that allocator setting, I'm looking for ways to tune the configuration so the OOM does not happen at all.
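For reference, these are the generic PyTorch memory-saving tactics I'd try first, sketched against a stand-in module (the `model`/`latents` names and shapes below are placeholders, not the project's real API):

```python
# General VRAM-reduction tactics for inference on a 12GB card.
import torch
import torch.nn as nn

# Stand-in for the real model; the project's module would replace this.
model = nn.Sequential(nn.Conv2d(4, 4, 3, padding=1)).to("cuda", dtype=torch.float16)

# Half precision halves the memory footprint of weights and activations.
latents = torch.randn(1, 4, 128, 128, device="cuda", dtype=torch.float16)

# inference_mode skips autograd bookkeeping, so activations aren't retained.
with torch.inference_mode():
    out = model(latents)

# Return cached allocator blocks to the driver between runs.
torch.cuda.empty_cache()
```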