gokayfem / ComfyUI_VLM_nodes

Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation
Apache License 2.0

The VRAM usage is too high and cannot be released. #67

Open QL-boy opened 2 months ago

QL-boy commented 2 months ago


The model is not unloaded from VRAM after each generation, and using multiple identical nodes loads the model multiple times, resulting in high VRAM usage.

The screenshots show the LLM's VRAM usage after a fresh boot, a single workflow run, and the automatic unloading of the SD model.

Even using --disable-smart-memory doesn't help.

Even with a 4090 graphics card, this consumption is hard to bear.

Is there any way to automatically unload the model from VRAM after each generation? Or is there some other way to reduce the model's VRAM usage?

gokayfem commented 2 months ago

I'm working on releasing GPU memory after generation. I will add this to all of the VLM nodes.