Hi, @wilfrediscoming, can you tell me your machine's specifications? GPU memory size and everything.
Use the --jit option. In case you get a VRAM error, use these already-compiled models.
Add to config.yaml:
base-cuda-pt:
  url: "https://drive.google.com/file/d/1-3zbdYKuXlHlh-cj1UJfem8mcRKBlP1d/view?usp=drive_link"
  md5: NULL
  ckpt_name: "ckpt_base_cuda.pt"
  http_proxy: NULL
  base_size: [1024, 1024]

fast-cuda-pt:
  url: "https://drive.google.com/file/d/1OlruRBHEFjRUoM5PwdnWnnAAuLXG35fU/view?usp=drive_link"
  md5: NULL
  ckpt_name: "ckpt_fast_cuda.pt"
  http_proxy: NULL
  base_size: [384, 384]

base-nightly-cuda-pt:
  url: "https://drive.google.com/file/d/1-UkSRe8RpT1iblo6orbklEzX5BnBOY15/view?usp=drive_link"
  md5: NULL
  ckpt_name: "ckpt_base_nightly_cuda.pt"
  http_proxy: NULL
  base_size: [1024, 1024]
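For reference, a minimal sketch of using one of these precompiled checkpoints; mode, jit, and device come from the Remover signature, while the process() call and image handling are assumptions on my part:

```python
from PIL import Image
from transparent_background import Remover

# jit=True makes Remover load the pre-traced checkpoint
# (e.g. ckpt_base_cuda.pt) instead of tracing the model itself.
remover = Remover(mode='base', jit=True, device='cuda:0')

img = Image.open('input.png').convert('RGB')
out = remover.process(img, type='rgba')  # assumed background-removal call
out.save('output.png')
```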
> Hi, @wilfrediscoming, can you tell me your machine's specifications? GPU memory size and everything.
I am using a T4 GPU, but I am also running other Stable Diffusion models, so GPU memory is tight.
I found that when I upload a large image for background removal, it throws a CUDA out-of-memory error,
so I am wondering if there is any way to avoid this.
> Use the --jit option. In case you get a VRAM error, use these already-compiled models […]
This jit way doesn't work:
ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transparent_background/Remover.py in __init__(self, mode, jit, device, ckpt, fast)
     95         try:
---> 96             traced_model = torch.jit.load(
     97                 os.path.join(ckpt_dir, ckpt_name), map_location=self.device

26 frames

ValueError: The provided filename /root/.transparent-background/ckpt_base_cuda:0.pt does not exist

During handling of the above exception, another exception occurred:

OutOfMemoryError                          Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    454                             weight, bias, self.stride,
    455                             _pair(0), self.dilation, self.groups)
--> 456         return F.conv2d(input, weight, bias, self.stride,
    457                         self.padding, self.dilation, self.groups)
    458

OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacty of 14.75 GiB of which 18.81 MiB is free. Process 2097 has 14.73 GiB memory in use. Of the allocated memory 13.29 GiB is allocated by PyTorch, and 190.99 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
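As the message itself suggests, fragmentation can sometimes be mitigated via PYTORCH_CUDA_ALLOC_CONF. A minimal sketch of setting it (the parameter names for Remover are taken from the traceback above; max_split_size_mb:128 is an arbitrary example value):

```python
import os

# Must be set before the first CUDA allocation, i.e. before anything
# initializes CUDA; 128 MiB is an arbitrary example split size.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

from transparent_background import Remover

remover = Remover(jit=True, device='cuda:0')  # allocations now follow the new policy
```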
The generated checkpoint name is wrong: ckpt_base_cuda:0.pt --> ckpt_base_cuda.pt. It looks like the device string "cuda:0" leaks into the filename.
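Until that is fixed, a hypothetical workaround is to give the loader a copy of the checkpoint under the name it actually asks for (the cache directory below is the one shown in the traceback):

```python
import os
import shutil

# The loader searches for ckpt_base_cuda:0.pt because the device index
# appears to leak into the name; copy the real file to that (buggy) name.
ckpt_dir = os.path.expanduser("~/.transparent-background")
shutil.copy(
    os.path.join(ckpt_dir, "ckpt_base_cuda.pt"),
    os.path.join(ckpt_dir, "ckpt_base_cuda:0.pt"),
)
```

Passing device='cuda' (without the ':0' index) when constructing Remover might also sidestep the bad name, assuming the filename is built from the device string.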
Closing due to inactivity.
I sometimes get CUDA out of memory when processing larger images. Is there any way to reduce CUDA memory usage when using transparent_background?
Or should I simply reduce the image size before calling remove()?
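For what it's worth, a minimal sketch of that resize workaround, assuming PIL images and that the result is then handed to the library's removal call; the 1024 px cap is an arbitrary choice:

```python
from PIL import Image

def shrink_to_fit(img: Image.Image, max_side: int = 1024) -> Image.Image:
    """Downscale so the longest side is at most max_side, keeping aspect ratio."""
    scale = max_side / max(img.size)
    if scale >= 1.0:
        return img  # already small enough
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, Image.LANCZOS)

img = shrink_to_fit(Image.open("input.png").convert("RGB"))
# ... then pass `img` to the background-removal call, e.g. remover.process(img)
```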