Closed: SheprStone closed this issue 1 day ago
That low GPU usage sounds like you are simply running out of VRAM; are you monitoring that? By default, if VRAM isn't enough, the NVIDIA driver on Windows will offload the overflow to system RAM, and that is incredibly slow.
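If you want to check from code rather than Task Manager, a rough sketch like this (assuming the same CUDA-enabled PyTorch environment that ComfyUI uses) polls the card's memory while a generation runs:

```python
# Rough sketch: poll VRAM usage while a generation is running.
# Assumes the CUDA-enabled PyTorch environment that ComfyUI itself uses.
import time

import torch

def report_vram(tag: str = "") -> None:
    # mem_get_info returns (free, total) in bytes for the whole device,
    # so it also reflects memory used by other processes.
    free_b, total_b = torch.cuda.mem_get_info()
    used_gib = (total_b - free_b) / 1024**3
    print(f"[{tag}] VRAM used: {used_gib:.2f} / {total_b / 1024**3:.2f} GiB")

if __name__ == "__main__":
    if not torch.cuda.is_available():
        raise SystemExit("No CUDA device visible to this PyTorch install.")
    while True:
        report_vram("poll")
        time.sleep(2)
```

Note that once the driver starts spilling into system RAM, the overflow shows up as "Shared GPU memory" in Task Manager rather than in these numbers, so dedicated VRAM pinned near 100% while the GPU core sits idle is the tell-tale sign.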
Thanks for the reply! I have 32 GB of RAM. When generation starts, RAM usage slowly climbs toward 100%, but by the time it gets to processing samples, RAM is at 20–30%, and the GPU is at 0–7%, as it was before and still is.
I was talking about VRAM, the GPU's own memory, which on most 3080s (typically 10 GB) is very limited for these kinds of models.
Ah, I get it. Now I can finally set this program aside with peace of mind until better times and stop torturing my computer and ChatGPT. Thanks a lot for the answer. I'll look for alternatives that my system can actually handle.
Hi, I’m experiencing an issue with ComfyUI-MochiWrapper while trying to generate videos. I’m using an NVIDIA RTX 3080 GPU, but during generation, the GPU usage stays at 4–5%, even though I’ve seen other users report GPU usage near 100%. Additionally, the generated videos are of very low quality (severe pixelation and distortions).
Steps I've Tried to Fix the Issue:

Hardware and Driver Verification:
- Running on Windows 10.
- NVIDIA drivers are up to date.
- CUDA and PyTorch are installed and verified as functioning (tested with other neural networks where GPU usage reaches 30–40%, confirming the hardware works correctly; see the quick check sketch after this list).

Launch Settings:
- Added various flags such as `--force-fp16`, `--highvram`, and `--use-split-cross-attention`.
- Tested different memory optimization levels (`--normalvram`, `--lowvram`).
- Tried enabling and disabling xFormers (`--disable-xformers`).

Model and VAE Setup:
- Verified that the VAE and model versions match (currently using bf16).
- Double-checked the file paths in `models/vae` and `models/diffusion_models`.

Observed Behavior:
- Videos are generated, but the quality is very poor.
- GPU usage remains extremely low.
- Other neural network models on the same system generate images/videos with proper GPU utilization (30–40%).
- Every time it reaches the video generation step, it hangs on the line `Processing Samples: 0%| 0/30 [00:00<?, ?it/`.

I've been puzzling over this for almost a week now, and everyone on YouTube seems to get it working. PLEASE HELP.
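A minimal sketch of that CUDA/PyTorch verification step, run with the same Python interpreter (venv) that launches ComfyUI; the bf16 line matters because the model files here are bf16:

```python
# Minimal sketch of the CUDA / PyTorch / bf16 sanity check.
# Run it with the same Python interpreter (venv) that launches ComfyUI.
import torch

print("CUDA available:", torch.cuda.is_available())
print("Torch CUDA build:", torch.version.cuda)
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("bf16 supported:", torch.cuda.is_bf16_supported())
    print("Total VRAM (GiB):",
          torch.cuda.get_device_properties(0).total_memory / 1024**3)
```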