comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

!!! Exception during processing!!! Trying to convert Float8_e5m2 to the MPS backend but it does not have support for that dtype. #4242

Open luohui1102 opened 1 month ago

luohui1102 commented 1 month ago

Your question

"Hello! I'm using an Apple M1 chip, and I'm encountering MPS issues when running many nodes. Are there any solutions?"

Logs

!!! Exception during processing!!! Trying to convert Float8_e5m2 to the MPS backend but it does not have support for that dtype.
Traceback (most recent call last):
  File "/Users/llh/pinokio/api/comfyui.git/app/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Users/llh/pinokio/api/comfyui.git/app/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Users/llh/pinokio/api/comfyui.git/app/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy_extras/nodes_custom_sampler.py", line 612, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "/Users/llh/pinokio/api/comfyui.git/app/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/k_diffusion/sampling.py", line 143, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/samplers.py", line 682, in __call__
    return self.predict_noise(*args, **kwargs)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/samplers.py", line 685, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/model_base.py", line 122, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "/Users/llh/pinokio/api/comfyui.git/app/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/llh/pinokio/api/comfyui.git/app/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/ldm/flux/model.py", line 143, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/ldm/flux/model.py", line 101, in forward_orig
    img = self.img_in(img)
  File "/Users/llh/pinokio/api/comfyui.git/app/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/llh/pinokio/api/comfyui.git/app/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/ops.py", line 63, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/ops.py", line 58, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/ops.py", line 39, in cast_bias_weight
    bias = cast_to(s.bias, dtype, device, non_blocking=non_blocking)
  File "/Users/llh/pinokio/api/comfyui.git/app/comfy/ops.py", line 24, in cast_to
    return weight.to(device=device, dtype=dtype, non_blocking=non_blocking)
TypeError: Trying to convert Float8_e5m2 to the MPS backend but it does not have support for that dtype.


salinas707 commented 1 month ago

In the Load Diffusion Model node, try changing weight_dtype to Default (16-bit).
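For context, the traceback ends in ComfyUI's weight-casting path calling `weight.to(device=device, dtype=dtype)` with a float8 dtype on an MPS device, which Apple's MPS backend cannot represent; picking a 16-bit weight_dtype avoids the failing cast. A minimal sketch of that kind of fallback logic (hypothetical helper, not ComfyUI's actual code):

```python
# Hedged sketch, NOT ComfyUI's real implementation: choose a dtype the
# target backend can actually hold before calling tensor.to(device, dtype).
# The MPS backend currently lacks float8 (and float64) support, so fp8
# weights must be upcast before being moved to the Apple GPU.

MPS_UNSUPPORTED = {"float8_e5m2", "float8_e4m3fn", "float64"}

def safe_cast_dtype(requested: str, device: str) -> str:
    """Fall back to float16 when the device can't represent the dtype."""
    if device == "mps" and requested in MPS_UNSUPPORTED:
        return "float16"
    return requested

print(safe_cast_dtype("float8_e5m2", "mps"))   # upcast needed on MPS
print(safe_cast_dtype("float8_e5m2", "cuda"))  # unchanged elsewhere
```

In practice the Load Diffusion Model node's weight_dtype setting plays this role: leaving it at Default lets the loader keep weights in a dtype the backend supports instead of forcing an fp8 cast.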

luohui1102 commented 1 month ago

Thank you very much, it has been resolved.

bharattrader commented 1 month ago

> Thank you very much, it has been resolved.

How much unified memory do you have? Thanks

rachelcenter commented 2 weeks ago

> In the load diffusion model node, try changing the weight_dtype to Default (16bit).

I chose default instead of fp8_e4m3fn and got this (see image below). I'm on an M2 Mac with 128 GB of RAM.

[image: first flux test_00001_]