Closed ClothingAI closed 1 month ago
I can't see your error message. In my experience, the problem is not the sampler; your UNet loader with GGUF may be the issue. Please report it back to the GGUF loader node's repo. If the problem is not the node, are you using a GPU that does not support bfloat16?
Quadro 6000, does that not support it? If not, what should I do? It has 24 GB of VRAM.
bf16 requires at least compute capability 8.0, and the Quadro 6000 may be 7.5. I'm not an expert on these things; you would need to use the fp16 checkpoint of Flux along with the regular UNet loader node.
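To illustrate the capability cutoff mentioned above: a minimal sketch with a hypothetical helper `supports_bf16` (not part of any library) that encodes the rule that bfloat16 needs compute capability 8.0 (Ampere) or newer, while Turing cards like the Quadro RTX 6000 sit at 7.5. If you have PyTorch installed, you can query the real values with `torch.cuda.get_device_capability()` or check support directly with `torch.cuda.is_bf16_supported()`.

```python
# Hypothetical helper: bf16 is available from compute capability 8.0 (Ampere) up.
def supports_bf16(major: int, minor: int) -> bool:
    return (major, minor) >= (8, 0)

# Quadro RTX 6000 is a Turing card, compute capability 7.5
print(supports_bf16(7, 5))  # → False
# An Ampere card such as the RTX 3090 is 8.6
print(supports_bf16(8, 6))  # → True
```

With PyTorch, the equivalent check on your own machine would be `torch.cuda.is_bf16_supported()`; if it returns `False`, stick to fp16 or fp8 weights.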
I could not even make a regular Flux PuLID workflow work. Can you suggest a JSON with fp16, please, with the precise names of the models so I have no doubt and no errors?
You may try pulid_flux_16bit_simple.json. Be aware that it may consume more VRAM. You can also just grab an fp8 checkpoint of Flux dev and use it in the fp16 workflow; that's more feasible.
Thanks! I have enough VRAM it seems:
But I just keep getting a specific error apparently: https://github.com/balazik/ComfyUI-PuLID-Flux/issues/33#issuecomment-2434144823
Even this:
I keep getting this error, and the workflow stops at the SamplerCustomAdvanced node:
I don't understand what to do:
What is the problem?