You shouldn't have problems with a state-of-the-art GPU. Did you try the different command-line arguments as described here?
If your GPU has less than 10 GB of memory, you can only use the small models. Start with '-smallmodels'.
Is there a way to configure the startup so this can work with an RTX 3060 Ti?
I'm getting this error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 8.00 GiB total capacity; 7.30 GiB already allocated; 0 bytes free; 7.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
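The error message itself points at one workaround: setting `max_split_size_mb` through the `PYTORCH_CUDA_ALLOC_CONF` environment variable, which PyTorch's caching allocator reads to reduce fragmentation. Below is a minimal sketch, assuming a Python entry point you control; the value 128 is an illustrative starting point, not a number recommended by this project.

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before PyTorch allocates any CUDA
# memory, so set it before importing torch to be safe.
# 128 MiB is an assumed illustrative value; tune it for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# Sanity check: confirm the GPU is visible and inspect reserved memory.
print(torch.cuda.get_device_name(0))
print(torch.cuda.memory_reserved(0))
```

The same variable can also be exported in the shell before launching the script, which avoids editing any code.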