ali-vilab / AnyDoor

Official implementation for the paper: AnyDoor: Zero-shot Object-level Image Customization
https://ali-vilab.github.io/AnyDoor-Page/
MIT License

How many GPUs are needed for the test demo? #32

Open anthonyyuan opened 9 months ago

anthonyyuan commented 9 months ago

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 58.00 MiB. GPU 0 has a total capacty of 15.70 GiB of which 53.31 MiB is free. Including non-PyTorch memory, this process has 15.07 GiB memory in use. Of the allocated memory 14.65 GiB is allocated by PyTorch, and 196.92 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
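The traceback itself suggests one mitigation: when a large amount of memory is reserved but unallocated, setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` can reduce fragmentation. A minimal sketch (the value 128 is illustrative, not a recommendation from the repo):

```python
import os

# Must be set before PyTorch makes its first CUDA allocation -- in practice,
# before importing torch at the top of the inference script, or exported in
# the shell before launching it.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # → max_split_size_mb:128
```

Smaller split sizes make the caching allocator less prone to fragmentation at some cost in allocation efficiency; this only helps when the "reserved but unallocated" figure is large, not when the model genuinely exceeds the card's capacity.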

himmetozcan commented 8 months ago

I also couldn't run it with 16 GB of VRAM on Colab.

sohaibhaider commented 8 months ago

Use the "Pruned model - 4.57 GB". I am running it on a GTX 1070 with 8 GB of VRAM (using 2 GB of shared memory, for 10 GB total).

Eddudos commented 8 months ago

While running inference on an A10 GPU, it used about 18 GB of VRAM and about 15 GB of RAM.
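Figures like these can be measured rather than estimated. A small sketch of how one could report peak VRAM after a run (the helper below is ours, not part of AnyDoor; the `torch.cuda` calls in the comment require a CUDA machine):

```python
def to_gib(num_bytes: int) -> float:
    """Convert a byte count (e.g. torch.cuda.max_memory_allocated()) to GiB."""
    return num_bytes / (1024 ** 3)

# On a CUDA machine, bracket the inference like this:
#   torch.cuda.reset_peak_memory_stats()
#   ... run the AnyDoor inference script ...
#   print(f"peak VRAM: {to_gib(torch.cuda.max_memory_allocated()):.2f} GiB")

print(round(to_gib(18 * 1024 ** 3), 2))  # → 18.0
```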

luccachiang commented 8 months ago

A 3090 (24 GB) can run the inference script.

Gooddz1 commented 4 months ago

> Use "Pruned model - 4.57 GB" I am running it on a GTX 1070 8GB VRAM GPU (utilizing 2GB from Shared Memory - Total 10GB)

How do I need to modify the configuration file to use AnyDoor's pruned model after downloading it?
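Pruned Stable Diffusion-style checkpoints generally keep the same `state_dict` keys and only drop optimizer states and EMA copies, so usually only the checkpoint path needs to change. A hypothetical sketch, assuming the inference script defines its checkpoint path near the top (the file names below are placeholders, not the repo's actual paths):

```python
# Placeholder path -- substitute wherever run_inference.py (or your config
# file) points at the full checkpoint.
pruned_ckpt = "checkpoints/anydoor_pruned.ckpt"  # the 4.57 GB download

# Loading would then look roughly like (requires torch and the model object):
#   sd = torch.load(pruned_ckpt, map_location="cpu")
#   model.load_state_dict(sd.get("state_dict", sd), strict=False)
print(pruned_ckpt)
```

`strict=False` is a common hedge when a pruned file omits non-essential keys; if the script errors on missing keys, that is the first thing to check.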