GalaxyTimeMachine closed this issue 2 months ago
This is likely due to the pipeLoader attempting to keep both the base and refiner models loaded. Do the models get reloaded each time when using the built-in nodes? Could you share both node setups and any relevant log messages, and I'll try to sort it out.
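The difference between the two behaviours suspected above can be sketched generically: a loader that caches every model it touches (so base and refiner both stay resident) versus one that evicts the previous model before loading the next. This is an illustrative sketch only; the class names and sizes are hypothetical stand-ins, not ComfyUI's or the ttN nodes' actual API.

```python
# Sketch of the suspected leak: a loader that caches every model it loads
# versus one that frees the previous model first. Sizes are illustrative
# placeholders, not real SDXL checkpoint sizes.

class NaiveLoader:
    """Keeps every loaded model resident -- VRAM usage only grows."""
    def __init__(self):
        self.cache = {}      # model name -> simulated VRAM footprint (GB)
        self.vram_used = 0

    def load(self, name, size_gb):
        if name not in self.cache:
            self.cache[name] = size_gb
            self.vram_used += size_gb
        return name

class EvictingLoader:
    """Frees the previously loaded model before loading a new one."""
    def __init__(self):
        self.current = None  # (name, size_gb) of the one resident model
        self.vram_used = 0

    def load(self, name, size_gb):
        # Unloading the old model first keeps only one model resident.
        self.current = (name, size_gb)
        self.vram_used = size_gb
        return name

naive, evicting = NaiveLoader(), EvictingLoader()
# Alternating base/refiner passes, as in an SDXL base + refiner workflow.
for model, size in [("base", 7), ("refiner", 6), ("base", 7), ("refiner", 6)]:
    naive.load(model, size)
    evicting.load(model, size)

print(naive.vram_used)     # 13 -- base and refiner both stay resident
print(evicting.vram_used)  # 6  -- only the most recent model resident
```

With a 24GB card, the resident checkpoints plus sampling activations and VAE decode can push the naive pattern past physical VRAM, at which point the driver spills into (much slower) shared system memory, matching the slowdown described here.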
There aren't really any log messages; it continues to work, just at an extremely slow pace. I'll upload the workflows when I'm back at the PC.
This shows the VRAM usage for the 1st image generation after starting ComfyUI:
The problem really starts after about 4 or 5 images have been generated; this image is after the 3rd generation. Note how shared memory is already starting to be used, and idle VRAM usage is over 20GB:
This was the workflow used:
Eventually it gets to the stage where nothing responds and I have to just close the ComfyUI server window to clear the memory:
Using this workflow:
This is how the VRAM usage looks on the 1st image generation, and shows the idle VRAM usage dropping to 14.9GB:
I can keep creating more images, and VRAM usage ramps up to 18.1GB, but then it drops back down to 14.9GB and never spills into shared memory.
I did initially report the problem in ComfyUI, but eventually traced it to the ttN nodes. An error was reported on one occasion (maybe it helps?); it was logged in the report at https://github.com/comfyanonymous/ComfyUI/issues/1332#issuecomment-1694444943
Is this still occurring after the latest push?
It is still using more than the available physical VRAM, and that is while creating only a single SDXL image. I thought I could at least avoid it by not using the refiner now, but this is how that looks...
The PipeLoaderSDXL and PipeKSamplerSDXL combination is now unusable for me. I have an RTX 4090; its 24GB of VRAM is maxed out and shared memory is being used, which makes creating anything impossible. I've switched to the built-in KSampler (Advanced) nodes, and VRAM usage doesn't go above 20GB.