Open mathigatti opened 1 week ago
@mathigatti Thanks for the tip!
Were you able to run the 2B model on the free T4? I got an error:
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 34.94 GiB. GPU 0 has a total capacity of 14.75 GiB of which 5.13 GiB is free. Process 17472 has 9.61 GiB memory in use. Of the allocated memory 7.19 GiB is allocated by PyTorch, and 2.29 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
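The error message itself suggests setting `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` to reduce fragmentation. A minimal sketch of how to apply it (the env var must be set before PyTorch initializes CUDA, so do it before `import torch`; this won't help if the model genuinely needs more than the T4's 14.75 GiB):

```python
import os

# Must be set BEFORE importing torch, so the CUDA caching allocator
# reads it at initialization time.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])

# import torch  # import torch only after the variable is set
```

Alternatively, export it in the shell before launching the script: `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python your_script.py`.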
I got this error too; updating gradio fixed it. I'm sharing the fix in case it's useful to someone else.
Fix command:
pip install -U gradio==4.43.0