Open Kvento opened 10 hours ago
That's very close to what it needs; maybe you have some other software taking up VRAM in Windows? With my large monitor and a browser open with many tabs, Windows can take 1-3 GB of VRAM on its own.
No, my baseline system video memory consumption is 0.6-0.9 GB, and before launching ComfyUI I close any software or utilities that could affect it.
That's odd, then; with fp8 it isn't even taking 10 GB for me. I'm on torch 2.4.1 cu124; not sure what else could affect it.
"The 2080 Ti doesn't support BF16, so I can only use FP16, but FP16 throws NaN errors. I tried running it with FP8, but the memory usage is outrageous: even with 22 GB of VRAM it still runs out of memory."
I'm trying to run the model on my RTX 2080 Ti, but so far I haven't succeeded.
Video memory consumption is far too high. With the standard workflow settings (loading the model in bf16) I get a crash with the message "Allocation on device". If I instead enable fp16 mode with model dtype fp8_e4m3fn, the model loads without crashing, but during the PyramidFlow Sampler step the memory consumption explodes: it exceeds the card's VRAM, spills into Shared GPU Memory, and the processing time tends to infinity.
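To rule out dtype and baseline-VRAM issues up front, here is a small diagnostic sketch (assuming a working torch install; the `gpu_report` helper name is mine) that prints what the card actually supports and how much memory is free before ComfyUI allocates anything:

```python
import torch

def gpu_report() -> str:
    """Summarize dtype support and free VRAM for the current CUDA device."""
    if not torch.cuda.is_available():
        return "no CUDA device visible"
    props = torch.cuda.get_device_properties(0)
    free, total = torch.cuda.mem_get_info(0)  # bytes (free, total)
    return (
        f"{props.name}: {total / 2**30:.1f} GB total, "
        f"{free / 2**30:.1f} GB free before load, "
        f"bf16 supported: {torch.cuda.is_bf16_supported()}"
    )

print(gpu_report())
```

On a 2080 Ti this should report bf16 unsupported (Turing, compute capability 7.5), which is consistent with the "Allocation on device"/NaN behavior people see when forcing bf16 or fp16 on that card.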
I've read that others run the model with lower memory consumption, so I think maybe I'm doing something wrong? I have already deleted and re-downloaded ComfyUI portable and the required nodes, but the result hasn't changed.