formulake opened this issue 3 months ago
Quick workaround: open workflows/flux.json. On line 65 there is "unet_name": "flux1-dev.sft";
change it to "flux1-schnell.sft", then restart the app.
As for the issue you're hitting with result.map, can you provide your setup? Your .env file, without API keys?
I had to change the file type to a .json so I could upload it here. Thanks for the unet quick fix. This will enable me to use Schnell but the step count will remain specific to the dev model. Do I have that right? Do I simply need to change that parameter in the flux.json file for it to follow the Schnell step count?
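On the step count: schnell is distilled to produce usable images in roughly 4 steps, while dev typically needs 20 or more. If the workflow JSON hardcodes the sampler's step count, you would lower that too. A minimal sketch of the two fields involved (the exact node layout and field names depend on your flux.json, so check it rather than pasting this verbatim):

```json
{
  "unet_name": "flux1-schnell.sft",
  "steps": 4
}
```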
I'm running ComfyUI on the default port (8188). I've placed the files (unet, clip and vae) where they belong, so no missing-file errors are reported when it sends the prompt over for inference. I've got OLLAMA running with Gemma2. I tried LLAMA3.1 first, but I don't think it was able to output the JSON that flux-magic expected. At least, that's what I gathered from the error.
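One thing worth knowing on the Ollama side: the /api/generate endpoint accepts a "format": "json" option that constrains the model to emit valid JSON, which may help with models like LLAMA3.1 that otherwise return free-form text. A sketch against Ollama's default port (the prompt is illustrative, not from flux-magic):

```shell
# Ask Ollama (default port 11434) for a JSON-only response.
# "format": "json" forces the model to emit valid JSON.
resp=$(curl -s http://localhost:11434/api/generate -d '{
  "model": "gemma2",
  "prompt": "Return a JSON object describing an image of a sunset.",
  "format": "json",
  "stream": false
}' || echo '{"error": "Ollama not reachable"}')
echo "$resp"
```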
Repo updated; it supports schnell as well. You can select it in the UI.
Just make sure you have the "flux1-dev.sft" or "flux1-schnell.sft" files, with exactly these names.
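Since the app matches these filenames exactly, a quick check from the ComfyUI root can save a round trip (models/unet is the usual ComfyUI location for these files; adjust the path to your install):

```shell
# Report which of the exactly-named Flux UNet files are present.
for f in flux1-dev.sft flux1-schnell.sft; do
  if [ -e "models/unet/$f" ]; then
    echo "found:   models/unet/$f"
  else
    echo "missing: models/unet/$f"
  fi
done
```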
Thanks for addressing this. It works now and the image gets generated but it doesn't pop up in the UI. All the steps are run and the image can be found in the flux-magic/tmp directory though.
Here's what the OLLAMA server returns:
Actual generation time: 27713 ms
TypeError: result.map is not a function
at file:///H:/Tools/flux-magic-local/app.js:165:83
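The result.map failure at app.js:165 means the value being mapped is not an array: the LLM response likely came back as a single object or a JSON string. A hedged sketch of a normalizing guard (the function name and usage line are mine, not taken from app.js):

```javascript
// Coerce an LLM response into an array before calling .map on it.
// Handles three shapes: a JSON string, a single object, or an array.
function toArray(result) {
  if (typeof result === "string") {
    try {
      result = JSON.parse(result);
    } catch {
      // Not JSON; fall through and wrap the raw string instead.
    }
  }
  return Array.isArray(result) ? result : [result];
}

// Hypothetical usage at the failing call site:
//   const items = toArray(result).map(item => /* ... */ item);
```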
And here's what the ComfyUI server returns:
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLOW
Model doesn't have a device attribute.
clip missing: ['text_projection.weight']
Requested to load FluxClipModel_
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.71 seconds
Warning: TAESD previews enabled, but could not find models/vae_approx/None
Requested to load Flux
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.02 seconds
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:03<00:00, 1.56it/s]
Using xformers attention in VAE
Using xformers attention in VAE
Model doesn't have a device attribute.
Requested to load AutoencodingEngine
Loading 1 new model
Prompt executed in 27.43 seconds
While the Replicate API approach lets you select which version of Flux to run, the local approach defaults to the dev model. Can you make it possible to run schnell locally as well? Additionally, the inference runs and then returns the result.map error shown above.