comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

Cannot handle this data type: (1, 1, 3), <f4 #3539

Open Maveyyl opened 4 months ago

Maveyyl commented 4 months ago

Hi,

After two days of not using it, I updated ComfyUI and now I get this error whenever I try to sample anything; it seems to happen when it tries to show a preview:

!!! Exception during processing!!! Cannot handle this data type: (1, 1, 3), <f4
Traceback (most recent call last):
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\PIL\Image.py", line 3130, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
KeyError: ((1, 1, 3), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\nodes.py", line 1344, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\nodes.py", line 1314, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 313, in motion_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\sample.py", line 37, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 761, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 663, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 650, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 629, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 534, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\k_diffusion\sampling.py", line 585, in sample_dpmpp_2m
    callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 532, in <lambda>
    k_callback = lambda x: callback(x["i"], x["denoised"], x["x"], total_steps)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\latent_preview.py", line 94, in callback
    preview_bytes = previewer.decode_latent_to_preview_image(preview_format, x0)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\latent_preview.py", line 18, in decode_latent_to_preview_image
    preview_image = self.decode_latent_to_preview(x0)
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\latent_preview.py", line 48, in decode_latent_to_preview
    return Image.fromarray(latents_ubyte.numpy())
  File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\PIL\Image.py", line 3134, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 3), <f4

Maveyyl commented 4 months ago

After digging a little into latent_preview.py, in the decode_latent_to_preview function (which was modified a few days ago), the values are transformed into the [0.0, 255.0] range, but the dtype stays torch.float32 instead of becoming torch.uint8.

For some reason the "to" method doesn't actually change the dtype unless you call it separately, like this:

        latents_ubyte = (((latent_image + 1) / 2)
                            .clamp(0, 1)  # change scale from -1..1 to 0..1
                            .mul(0xFF)  # to 0..255
                            )
        latents_ubyte = latents_ubyte.to(dtype=torch.uint8)  # do the dtype cast in its own call...
        latents_ubyte = latents_ubyte.to(device="cpu", dtype=torch.uint8, non_blocking=True)  # ...then move to the CPU

I'm not sure whether that defeats the purpose of the original change, though. Hope it helps.
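
For what it's worth, the PIL side of this is easy to reproduce on its own. This is a small standalone snippet of my own (not ComfyUI code) showing that Image.fromarray rejects a 3-channel float32 array with exactly this message, while the same data is accepted once it is uint8:

    import numpy as np
    from PIL import Image

    arr = np.zeros((1, 1, 3), dtype=np.float32)

    try:
        Image.fromarray(arr)  # PIL has no image mode for 3-channel float32
    except TypeError as e:
        print(e)  # Cannot handle this data type: (1, 1, 3), <f4

    Image.fromarray(arr.astype(np.uint8))  # fine once the array is uint8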

Maveyyl commented 4 months ago

The OS fix doesn't work for my Windows 11 + AMD CPU + AMD GPU setup.

dnswd commented 4 months ago

@Maveyyl Thanks for the latent_preview.py clue. I tried yours, but the result was blank. Then I tried:

        latents_ubyte = (((latent_image + 1) / 2)
                            .clamp(0, 1)  # change scale from -1..1 to 0..1
                            .mul(0xFF)  # to 0..255
                            )
        latents_ubyte = latents_ubyte.to(dtype=torch.uint8)
        latents_ubyte = latents_ubyte.to(device="cpu", dtype=torch.uint8, non_blocking=comfy.model_management.device_supports_non_blocking(latent_image.device))

and it works perfectly. I'm not sure why; maybe this issue is AMD-specific, but I hope this helps others.

OS: Windows 10 x86_64
CPU: AMD Ryzen 7 5700X (16) @ 3.393GHz
GPU: AMD Radeon RX 6700 XT

NeedsMoar commented 4 months ago

I could almost guarantee that AMD devices don't support non-blocking anything on Windows (especially not with DirectML).
None of the OpenCL extensions required to do it are there. The only way you'd get something like it is with resizable BAR enabled, but since that's cache-coherent I don't think the device itself considers it non-blocking (even if the CPU can) unless it needs to access it. Knowing the DirectML backend, setting it to true uses the flag anyway, but incorrectly: it doesn't wait until the transfer finishes when the CPU tries to access the data, as it should, which results in broken images.
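
To make that failure mode concrete, here is a minimal sketch of my own (illustrative only; whether the copy is actually asynchronous, and how it fails, depends entirely on the backend and driver). A non-blocking device-to-CPU copy may return before the data has landed in host memory, and if the backend never synchronizes the later CPU read, the preview is built from stale or uninitialized bytes; a blocking copy sidesteps the race:

    import torch

    def tensor_to_cpu_for_preview(t: torch.Tensor) -> torch.Tensor:
        # non_blocking=False: the call only returns once the data is actually
        # in host memory, so a subsequent .numpy() read cannot race the copy.
        return t.to(device="cpu", non_blocking=False)

    cpu_copy = tensor_to_cpu_for_preview(torch.rand(4, 4, 3))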

comfyanonymous commented 4 months ago

If that's the case, the right fix is adding a:

    if directml_enabled:
        return False

Here: https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/model_management.py#L630
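
For context, a rough sketch of how the patched helper in comfy/model_management.py might end up looking; the existing MPS check shown here is my assumption about the surrounding code, not a verbatim copy of the file:

    def device_supports_non_blocking(device):
        if is_device_mps(device):  # assumed existing check; MPS has a similar limitation
            return False
        if directml_enabled:  # proposed fix: DirectML can't do non-blocking transfers reliably
            return False
        return True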

traugdor commented 4 months ago

> If that's the case, the right fix is adding a:
>
>     if directml_enabled:
>         return False
>
> Here: https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/model_management.py#L630

Can confirm this is working on my Radeon 6600 XT.

tisThivas commented 4 months ago

After trying all of the suggested solutions, the only thing that worked was to re-download latent_preview.py and replace it with an older version. In my case, the one from 11 Mar 2024 was enough. It might not be a real solution, but it's a workaround for the moment.

djdoubt03 commented 3 months ago

Having the same issue. I'm using Stability Matrix with ComfyUI. It used to work, but I removed it a few weeks ago and decided to try again. CPU: AMD Ryzen 5 5600, GPU: AMD Radeon RX 6700 XT.

ERRORS:

G:\SD\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch\_dynamo\external_utils.py:17: UserWarning: Set seed for privateuseone device does not take effect, please add API's _is_in_bad_fork and manual_seed_all to privateuseone device module.
  return fn(*args, **kwargs)
Requested to load BaseModel
Loading 1 new model
loading in lowvram mode 64.0
G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py:655: UserWarning: The operator 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at C:\__w\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
  if latent_image is not None and torch.count_nonzero(latent_image) > 0: #Don't shift the empty latent image.
  0%| | 0/20 [00:08<?, ?it/s]
!!! Exception during processing!!! Cannot handle this data type: (1, 1, 3), <f4
Traceback (most recent call last):
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\PIL\Image.py", line 3130, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
KeyError: ((1, 1, 3), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\nodes.py", line 1355, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\nodes.py", line 1325, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 794, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 696, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 683, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 662, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 567, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\k_diffusion\sampling.py", line 140, in sample_euler
    callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 565, in <lambda>
    k_callback = lambda x: callback(x["i"], x["denoised"], x["x"], total_steps)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\latent_preview.py", line 91, in callback
    preview_bytes = previewer.decode_latent_to_preview_image(preview_format, x0)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\latent_preview.py", line 26, in decode_latent_to_preview_image
    preview_image = self.decode_latent_to_preview(x0)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\latent_preview.py", line 45, in decode_latent_to_preview
    return preview_to_image(latent_image)
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\latent_preview.py", line 19, in preview_to_image
    return Image.fromarray(latents_ubyte.numpy())
  File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\PIL\Image.py", line 3134, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 3), <f4

traugdor commented 3 months ago

> Having the same issue. I'm using Stability Matrix with ComfyUI. It used to work, but I removed it a few weeks ago and decided to try again. CPU: AMD Ryzen 5 5600, GPU: AMD Radeon RX 6700 XT.
>
> [quoted error log omitted; identical to the traceback in the comment above]

Try changing the preview_to_image method in your latent_preview.py file:

def preview_to_image(latent_image):
    latents_ubyte = (((latent_image + 1) / 2)
                        .clamp(0, 1)  # change scale from -1..1 to 0..1
                        .mul(0xFF)  # to 0..255
                        )
    # cast to uint8 in its own call first...
    latents_ubyte = latents_ubyte.to(dtype=torch.uint8)
    # ...then move to the CPU, non-blocking only if the device supports it
    latents_ubyte = latents_ubyte.to(device="cpu", dtype=torch.uint8, non_blocking=comfy.model_management.device_supports_non_blocking(latent_image.device))

    return Image.fromarray(latents_ubyte.numpy())
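
For anyone applying this by hand, a tiny standalone sanity check of my own (hypothetical, not part of ComfyUI) confirms that the separate cast produces something PIL will accept:

    import torch
    from PIL import Image

    latent_image = torch.rand(64, 64, 3) * 2 - 1  # stand-in for a decoded preview in [-1, 1]
    latents_ubyte = ((latent_image + 1) / 2).clamp(0, 1).mul(0xFF).to(dtype=torch.uint8).cpu()
    print(latents_ubyte.dtype)  # torch.uint8
    Image.fromarray(latents_ubyte.numpy())  # no TypeError
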
gcfrantz2009 commented 3 months ago

I initially got the "cannot handle this data type" error, and the fix above (updating the preview_to_image method in latent_preview.py) got me past that, but now I'm getting blank output.

Ryzen 7950X3D / Radeon 7900 XTX

traugdor commented 3 months ago

Sounds like a NaN issue, or something else is going on. Can you share a screenshot? I get blank images from time to time, which is almost always a driver issue. You can try to reset your GPU with this Python script; you must run Python from an administrator window.

import subprocess

def restart_gpu_driver():
    # Define the command to get the Instance ID of the GPU device
    command = 'powershell "Get-PnpDevice | Where-Object { ($_.Class -eq \'Display\' ) -and ($_.Status -eq \'OK\')} | Select-Object -ExpandProperty InstanceId"'

    try:
        # Get Instance ID and strip any extra whitespace or newline characters
        instanceID = subprocess.check_output(command, shell=True, text=True).strip()
        print("Running on the selected GPU: " + instanceID)

        # Define the commands to disable and enable the GPU device
        disable_command = f'powershell "Disable-PnpDevice -InstanceId \'{instanceID}\' -Confirm:$false"'
        enable_command = f'powershell "Enable-PnpDevice -InstanceId \'{instanceID}\' -Confirm:$false"'

        # Disable the GPU device
        subprocess.run(disable_command, shell=True, check=True)
        print("GPU disabled successfully.")

        # Enable the GPU device
        subprocess.run(enable_command, shell=True, check=True)
        print("GPU enabled successfully.")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")

# Call the function to restart the GPU driver
restart_gpu_driver()

Your screen will flash. This script usually fixes any NaN issues I have with my 6600 XT.

ansaus commented 3 months ago

> If that's the case, the right fix is adding a:
>
>     if directml_enabled:
>         return False
>
> Here: https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/model_management.py#L630

This one helped, in addition to the fix in latent_preview.py (which was probably not even needed).

traugdor commented 3 months ago

Forcing uint8 was specifically for AMD devices.

voyager5874 commented 2 months ago

It goes away if the preview is disabled via the Manager; the suggestions above didn't work for me (RX 570 8 GB VRAM, 32 GB RAM, --directml). https://www.reddit.com/r/StableDiffusion/comments/1cx2sqg/cmfyui_typeerror_cannot_handle_this_data_type_1_1/

zyzz15620 commented 2 months ago

@dnswd Thanks! I tried yours and it's working; my GPU is an AMD RX 580.

    latents_ubyte = (((latent_image + 1) / 2)
                        .clamp(0, 1)  # change scale from -1..1 to 0..1
                        .mul(0xFF)  # to 0..255
                        )
    latents_ubyte = latents_ubyte.to(dtype=torch.uint8)
    latents_ubyte = latents_ubyte.to(device="cpu", dtype=torch.uint8, non_blocking=comfy.model_management.device_supports_non_blocking(latent_image.device))