nadora35 opened this issue 3 weeks ago
same error
This error may be because ComfyUI autocasts the data type to bf16, but bf16 is supported only by a few newer NVIDIA GPUs; it may be mistaking your GPU for one that supports bf16. ComfyUI could instead run it with fp16, which every GPU supports, but fp16 needs more VRAM than bf16. FLUX PuLID may simply fail to run because, even with bf16, it consumes a huge amount of VRAM, nearly exhausting a 4090. Please report this to ComfyUI. Otherwise, my best suggestion is to upgrade your GPU.
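If you want to check what your card actually reports, here is a quick diagnostic you can run in the same Python environment as ComfyUI (my own sketch, not code from this repo; ComfyUI performs a comparable capability check internally):

```python
# Diagnostic sketch (not from PuLID/ComfyUI): report whether PyTorch
# thinks this GPU has native bf16 support.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    print(f"GPU: {torch.cuda.get_device_name(0)} (compute capability {major}.{minor})")
    # Native bf16 support starts with Ampere (compute capability 8.0+).
    print("bf16 supported:", torch.cuda.is_bf16_supported())
else:
    print("CUDA is not available")
```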
But a week ago I tested it and it worked... how is that!?
Basically, I never updated anything that could relate to bf16 or anything like that.
@nadora35 Have you found a solution for this?
> Basically, I never updated anything that could relate to bf16 or anything like that.
You literally have this in the code :)
```python
device = comfy.model_management.get_torch_device()
# Why should I care what args say, when the unet model has a different dtype?!
# Am I missing something?!
#dtype = comfy.model_management.unet_dtype()
dtype = model.model.diffusion_model.dtype
# For 8bit use bfloat16 (because ufunc_add_CUDA is not implemented)
if dtype in [torch.float8_e4m3fn, torch.float8_e5m2]:
    dtype = torch.bfloat16
```
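A hedged sketch of how that last branch could fall back to fp16 on GPUs without native bf16 support (just an illustration of the point, not the project's code; `model` is the node input, as in the snippet above):

```python
# Illustration only: pick bf16 for fp8-quantized checkpoints only when the
# GPU actually supports it, otherwise compute in fp16.
import torch

dtype = model.model.diffusion_model.dtype
if dtype in [torch.float8_e4m3fn, torch.float8_e5m2]:
    if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
        dtype = torch.bfloat16
    else:
        dtype = torch.float16  # universally supported, though untested here
```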
I am not sure that is specifically where the problem lies, because the same error happens even when using flux1-dev rather than flux1-dev-fp8:
`expected scalar type Half but found BFloat16`
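That error just means two tensors with different dtypes meet in the same CUDA op, for example the model running in fp16 (Half) while the injected ID embedding is bf16. A minimal, hypothetical sketch of the kind of cast that avoids it (the function and argument names are illustrative, not the actual PuLID code):

```python
import torch

def inject_id_embedding(hidden_states: torch.Tensor,
                        id_embedding: torch.Tensor,
                        weight: float = 1.0) -> torch.Tensor:
    # Cast the embedding to whatever dtype/device the model is actually using,
    # so fp16 (Half) and bf16 tensors never meet in the same addition.
    id_embedding = id_embedding.to(dtype=hidden_states.dtype,
                                   device=hidden_states.device)
    return hidden_states + weight * id_embedding
```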