patientx / ComfyUI-Zluda

The most powerful and modular stable diffusion GUI, API, and backend with a graph/nodes interface. Now ZLUDA enhanced for better AMD GPU performance.
GNU General Public License v3.0

Unable to Get Flux Working #20

Closed: markleaf131313 closed this 2 months ago

markleaf131313 commented 3 months ago

Your question

I'm using the default workflows provided on the comfyui site. Using the regular ones, I get a conversion error (as seen in the log) which outputs a blank image. If I try to use the GGUF loader and models instead, I get the output image below.

[attached image: ComfyUI_00002_]

I've also tried the fp8 models from the same site, but they don't load at all. The workflow uses checkpoints, but when I use the fp8 files as checkpoints I get an error that the checkpoint file type isn't recognized.

For reference, I was able to run the SDXL example from the comfyui site just fine, no issues. I am using a 6750 XT.

Logs

got prompt
Using split attention in VAE
Using split attention in VAE
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX
Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 9319.23095703125 True
clip missing: ['text_projection.weight']
Requested to load Flux
Loading 1 new model
loaded partially 9690.73859375 9690.732543945312 0
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [01:59<00:00,  5.98s/it]
Requested to load AutoencodingEngine
Loading 1 new model
loaded completely 0.0 319.7467155456543 True
C:\ComfyUI-Zluda\nodes.py:1498: RuntimeWarning: invalid value encountered in cast
  img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
Prompt executed in 227.42 seconds
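The RuntimeWarning above points at the final cast in nodes.py: if the decoded tensor contains NaN or Inf values (a common symptom of a dtype/precision mismatch on this hardware), `astype(np.uint8)` produces garbage and the saved image comes out blank. A minimal sketch of a guard, replacing non-finite values before the cast (this is a hypothetical illustration, not the repo's actual code):

```python
import numpy as np
from PIL import Image

def tensor_to_image(i):
    # nodes.py does np.clip(i, 0, 255).astype(np.uint8) directly; NaN/Inf in
    # the array trigger "invalid value encountered in cast" and a blank image.
    # Replace non-finite values first so the cast is always well defined.
    i = np.nan_to_num(i, nan=0.0, posinf=255.0, neginf=0.0)
    return Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
```

This only masks the symptom (the NaNs still mean the model ran in an unsupported precision); the real fix is usually a different weight dtype or the GGUF route described below.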

Other

No response

patientx commented 3 months ago

https://pastebin.com/3wgkYC0a

Save this as anyname.json and load it in ComfyUI, then check Manager for "install missing custom nodes". This requires the GGUF version of schnell (dev also works), which can be downloaded from https://huggingface.co/city96/FLUX.1-schnell-gguf/tree/main. I use Q4_0, it is enough. For the T5 clip encoder I use the standard clip_l and t5xxl_fp8_e4m3fn.safetensors (GGUF versions of the T5 clip exist, but somehow using them slows things down further).

Edit: to be clear, I use https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_0.gguf for dev (at least 20 steps) and https://huggingface.co/city96/FLUX.1-schnell-gguf/blob/main/flux1-schnell-Q4_0.gguf for schnell (4 steps).
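Since the workflow fails silently when a model file is in the wrong folder, a quick preflight check can save a debugging round trip. A sketch, assuming the default ComfyUI layout (GGUF UNet files in models/unet, text encoders in models/clip; the filenames below are the ones named in this thread):

```python
import os

# Assumed locations for the files mentioned above; adjust if your
# ComfyUI-Zluda install uses extra_model_paths.yaml or other folders.
REQUIRED = {
    "models/unet": ["flux1-schnell-Q4_0.gguf"],
    "models/clip": ["clip_l.safetensors", "t5xxl_fp8_e4m3fn.safetensors"],
}

def missing_models(root="."):
    # Return the files the Flux GGUF workflow would fail to find.
    missing = []
    for folder, names in REQUIRED.items():
        for name in names:
            if not os.path.isfile(os.path.join(root, folder, name)):
                missing.append(os.path.join(folder, name))
    return missing
```

Run it from the ComfyUI root before loading the workflow; an empty list means every file the workflow references is in place.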