Closed achiever1984 closed 2 hours ago
Same issue
+1
Yep. Same here unfortunately. (using Mac Studio M1 Max)
same here!
same issue!
same issue!
I have the same issue on M2 Pro.
I use the webui version from February 5th, Python v3.10, with the Flux models flux1-dev-bnb-nf4-v2.safetensors & flux1-dev-bnb-nf4.safetensors.
Strangely I have been having the exact same error with my AMD card on Linux.
@achiever1984 I can get it to work with the official Flux Dev release from Huggingface https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main
I agree that the flux1-dev-bnb-nf4.safetensors doesn't work - apologies if this doesn't help you.
@achiever1984 please try t5xxl_fp8_e4m3fn.safetensors instead of fp16. For me it started working after pulling changes from @conornash on MBP M3 Pro.
UPD: ah, never mind, I see @conornash used the fp16 encoder as well.
My M2 shows this error: TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
It is strange and very checkpoint dependent. For example, with the dev fp8 and the fp16 t5xxl I get this, which makes no sense:
TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
When I deselect the fp16 T5: AssertionError: You do not have T5 state dict!
Also, the fp16 T5 should be OK; it is about twice the size of the fp8 one.
So yes, I get the same error whether I select fp16 or fp8, which makes no sense.
One more thing: turning this option on or off does not make a difference: Enable T5 (load T5 text encoder; increases VRAM use by a lot, potentially improving quality of generation; requires model reload to apply)
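For context, the Float8_e4m3fn error above happens because the MPS backend has no fp8 support at all, so any fp8-quantized weights have to be upcast before being moved to the device. A minimal sketch of that idea (`mps_safe` is a hypothetical helper, not Forge's actual code):

```python
import torch

def mps_safe(t: torch.Tensor) -> torch.Tensor:
    """Upcast dtypes the MPS backend cannot represent before .to('mps').
    Hypothetical helper, not part of Forge: float64 -> float32,
    float8_e4m3fn -> float16 (fp8 dtypes exist only in torch >= 2.1)."""
    if t.dtype == torch.float64:
        return t.to(torch.float32)
    if hasattr(torch, "float8_e4m3fn") and t.dtype == torch.float8_e4m3fn:
        return t.to(torch.float16)
    return t

# A float64 tensor would trip the MPS error, so it gets downcast:
x = mps_safe(torch.zeros(2, dtype=torch.float64))
print(x.dtype)  # torch.float32
```

This is roughly what a backend-side fix would have to do; checkpoint loaders that hand fp8 tensors straight to MPS will fail regardless of the UI settings, which would explain why toggling the T5 option makes no difference.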
Just checked out @conornash's branch and for the first time I was able to load a Flux model on my Apple M1 Max 32GB. ❤️
Tested the forge branch from @conornash but same problem:
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
^CInterrupted with signal 2 in <frame at 0x31b61f840, file '/Users/barto/stable-diffusion-webui-forge-fork/modules_forge/main_thread.py', line 43, code loop>
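This float64 error usually originates upstream of MPS itself: NumPy's default float dtype is float64, and `torch.from_numpy` preserves it, so schedule/sigma arrays computed in NumPy blow up the moment they are moved to the device. A small illustration of the pattern and the float32 fix (the `sigmas` array is just an example, not the actual Forge code path):

```python
import numpy as np
import torch

# NumPy's default float dtype is float64, which MPS cannot hold:
sigmas = np.linspace(1.0, 0.0, 5)       # dtype float64
t = torch.from_numpy(sigmas)            # still float64

# Casting to float32 while moving to the device avoids the error:
device = "mps" if torch.backends.mps.is_available() else "cpu"
t32 = t.to(device, dtype=torch.float32)
print(t32.dtype)  # torch.float32
```

That is why the traceback points at whichever module first ships a float64 tensor to MPS, rather than at the model weights themselves.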
With fp8 or fp16, same problem.
Mac info: Apple M1 Max, 64 GB memory
:(
+1
same issue Apple M1, Macbook Pro
Had the same issue on an M2 Ultra (TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.). Replaced my flux.py file with the one from @conornash and it started working, but I didn't test exhaustively. Hopefully it will be merged into the main branch, but anyone can replace that file:
Download RAW file and replace accordingly: https://github.com/lllyasviel/stable-diffusion-webui-forge/blob/643c1089ca150294d96470b6d5f2bd73e0bd3da3/backend/nn/flux.py#L1
Tested with Flux Dev. AFAIK NF4 will not work; I'm not an expert, but it's something about not being compatible with the GPU, and at least for SwarmUI some Flux spins seem dependent on bitsandbytes being ported to Macs.
EDIT 2: Seems to be working with GGUF as well (but I couldn't make it work with Schnell); didn't notice any speed improvements with Q8 though.
After replacing flux.py, I get this error :
NotImplementedError: The operator 'aten::rshift.Scalar' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable
PYTORCH_ENABLE_MPS_FALLBACK=1
to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
There is already export PYTORCH_ENABLE_MPS_FALLBACK=1
in webui-macos-env.sh; I don't know if it's supposed to be added somewhere else.
I'm using Stability Matrix; I'm going to try with a vanilla installation.
Don't want to hijack this thread, but it's relevant I guess: for some reason, after updating today (git pull), Flux stopped working as it should; images can't resolve (noisy), same settings as before.
EDIT: My bad, it seems things are changing fast, and at least for Flux on Forge you need to check things properly; one day it works, the next day it doesn't (Euler a):
Hello.
When I try to generate an image in Flux mode using the flux1-dev-bnb-nf4.safetensors model on my MacBook, I get the following error:
What can I do to fix this?