fedyfausto closed this issue 10 months ago
Please update Fooocus to the latest version (yours: 2.1.703, latest: 2.1.862, which includes optimisations for VRAM usage), as you're still using a version with fcbh. After updating, please ensure you've enabled swap for your system; see also https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md#system-swap
I'd recommend not using --low-vram (nor the new flag --always-low-vram), as Fooocus automatically switches to low-VRAM mode if low resource availability is detected.
Please provide your feedback after updating Fooocus, checking swap and removing --low-vram.
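If it helps with verification, here is a small sketch (assuming the third-party psutil package is installed, which Fooocus itself does not require) to confirm that the Python process actually sees the swap space:

import psutil  # third-party; assumes psutil is installed

# Report the swap the OS exposes to this process, in MiB.
swap = psutil.swap_memory()
print(f"swap total: {swap.total / 1024**2:.0f} MiB, used: {swap.used / 1024**2:.0f} MiB")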
The swap is active:
swapon -s
Filename Type Size Used Priority
/swap.img file 8388604 0 -2
I checked out the main release and added the --attention-split option and it works, but why?
Splitting attention reduces peak VRAM usage, which makes it possible for you to run Fooocus. Please also check https://vaclavkosar.com/ml/cross-attention-in-transformer-architecture for further information on how attention works in SD.
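For illustration only, here is a minimal sketch of the general sliced-attention idea (not Fooocus's actual implementation): processing the query rows in chunks avoids materialising the full attention matrix at once, which is why the peak VRAM requirement drops.

import torch

def sliced_attention(q, k, v, slice_size=1024):
    # q, k, v: (batch, seq_len, dim). Instead of building the full
    # (seq_len x seq_len) score matrix in one go, only a
    # (slice_size x seq_len) block of scores lives in VRAM at a time.
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for start in range(0, q.shape[1], slice_size):
        end = start + slice_size
        scores = torch.softmax((q[:, start:end] @ k.transpose(1, 2)) * scale, dim=-1)
        out[:, start:end] = scores @ v
    return out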
I assume there is still a misconfiguration on the system, so swap isn't used effectively (or at all) by Fooocus, as this behaviour has only been reported by you on the latest version and it works on Colab and other cloud providers running Linux. Happy you found a working solution, closing this issue now. Feel free to reopen if you run into additional trouble.
The issue is not resolved, because if I try to use the image input, Fooocus crashes with the same errors :< It is also not normal that Fooocus takes this amount of RAM on our server: on Google Colab it works well with only 12 GB of RAM and VRAM combined, while our server has 12 GB of VRAM and 64 GB of RAM.
Hi,
I have a P40 with 24 GB VRAM and also get this error.
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 10.00 MiB. GPU 0 has a total capacty of 23.87 GiB of which 3.62 MiB is free. Process 3973655 has 456.00 MiB memory in use. Process 3973917 has 2.97 GiB memory in use. Process 3606741 has 458.00 MiB memory in use. Process 3607780 has 2.82 GiB memory in use. Process 783706 has 17.18 GiB memory in use. Of the allocated memory 16.56 GiB is allocated by PyTorch, and 456.57 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
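The max_split_size_mb hint from that message can be tried via the PYTORCH_CUDA_ALLOC_CONF environment variable; the value 512 below is only an illustrative choice, and the variable has to be set before PyTorch initialises CUDA (e.g. at the very top of the launch script, or exported in the shell before starting Fooocus).

import os

# Illustrative value; must be in place before the first CUDA call.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import torch  # imported afterwards so the allocator picks up the setting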
Read Troubleshoot
[x] I admit that I have read the Troubleshoot before making this issue.
Describe the problem
Hello guys, I am trying to launch Fooocus on an Ubuntu Server 22.04 machine with two NVIDIA Tesla K80s (each with 12 GB of VRAM). When I launch a prompt, Fooocus crashes saying that there is no VRAM left, because PyTorch is using about 10 GB of VRAM on its own (why?). How can I solve this problem? I tried --lowvram but it does not work. The PyTorch version is 2.0.1.
Full Console Log