sipie800 / ComfyUI-PuLID-Flux-Enhanced

Apache License 2.0

expected scalar type Half but found BFloat16 #15

Open nadora35 opened 3 weeks ago

nadora35 commented 3 weeks ago

expected scalar type Half but found BFloat16

[Screenshot: 2024-10-31 04_46_52-_Unsaved Workflow - ComfyUI]

zaccheus commented 3 weeks ago

same error

sipie800 commented 2 weeks ago

This error probably means ComfyUI is autocasting the data type to bf16, which is supported only by newer NVIDIA GPUs; it may be misdetecting your GPU as bf16-capable. ComfyUI could instead run it in fp16, which every GPU supports, but fp16 needs more VRAM than bf16, and FLUX PuLID may simply fail to run: even with bf16 it uses so much VRAM that it nearly maxes out a 4090. Please report this to ComfyUI, or better yet, upgrade your GPU.
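
A minimal way to check whether the GPU actually supports bf16 (a sketch assuming a CUDA build of PyTorch; torch.cuda.is_bf16_supported is the relevant check):

    import torch

    # Sketch: check whether this GPU can run bf16 at all; if not, fp16 is the
    # fallback, at the cost of more VRAM as described above.
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))
        bf16_ok = torch.cuda.is_bf16_supported()
        print("bf16 supported:", bf16_ok)
        working_dtype = torch.bfloat16 if bf16_ok else torch.float16
        print("working dtype:", working_dtype)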

nadora35 commented 2 weeks ago

But a week ago I tested it and it worked... how is that!?

sipie800 commented 2 weeks ago

Basically, I never updated anything that could relate to bf16 or such things.

anwoflow commented 1 week ago

@nadora35 Have you found a solution for this?

wandrzej commented 1 week ago

Basically, I never updated anything that could relate to bf16 or such things.

You literally have this in the code :)

        device = comfy.model_management.get_torch_device()
        # Why should I care what args say, when the unet model has a different dtype?!
        # Am I missing something?!
        #dtype = comfy.model_management.unet_dtype()
        dtype = model.model.diffusion_model.dtype
        # For 8bit use bfloat16 (because ufunc_add_CUDA is not implemented)
        if dtype in [torch.float8_e4m3fn, torch.float8_e5m2]:
            dtype = torch.bfloat16
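
For reference, the kind of mismatch that produces this error can be reproduced outside ComfyUI; a minimal sketch (illustrative only, the exact error wording varies by op and PyTorch version):

    import torch

    # Sketch: an fp16 (Half) module fed a bf16 input raises a dtype-mismatch
    # RuntimeError on CUDA, similar to the one in the issue title.
    device = "cuda"
    linear_fp16 = torch.nn.Linear(8, 8).to(device=device, dtype=torch.float16)
    x_bf16 = torch.randn(1, 8, device=device, dtype=torch.bfloat16)

    try:
        linear_fp16(x_bf16)
    except RuntimeError as e:
        print(e)

    # Casting everything to one dtype avoids the mismatch:
    out = linear_fp16(x_bf16.to(torch.float16))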

stranger-games commented 4 days ago

Basically, I never updated anything that could relate to bf16 or such things.

You literally have this in the code :)

        device = comfy.model_management.get_torch_device()
        # Why should I care what args say, when the unet model has a different dtype?!
        # Am I missing something?!
        #dtype = comfy.model_management.unet_dtype()
        dtype = model.model.diffusion_model.dtype
        # For 8bit use bfloat16 (because ufunc_add_CUDA is not implemented)
        if dtype in [torch.float8_e4m3fn, torch.float8_e5m2]:
            dtype = torch.bfloat16

I am not sure that's specifically where the problem is, because the same error happens even when using flux1-dev rather than flux1-dev-fp8.
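
One way to narrow it down would be to log both dtype sources that the snippet above touches; a hypothetical diagnostic (report_dtypes is not part of the node, just an illustration using the same ComfyUI calls):

    import comfy.model_management

    # Hypothetical helper: compare the dtype taken from the loaded checkpoint with
    # the dtype ComfyUI's model management would choose. If they disagree (e.g.
    # bfloat16 vs float16), the Half/BFloat16 mismatch can appear with either
    # flux1-dev or flux1-dev-fp8.
    def report_dtypes(model):
        device = comfy.model_management.get_torch_device()
        model_dtype = model.model.diffusion_model.dtype
        unet_dtype = comfy.model_management.unet_dtype()
        print("device:", device)
        print("model.model.diffusion_model.dtype:", model_dtype)
        print("comfy.model_management.unet_dtype():", unet_dtype)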