Ling-APE / ComfyUI-All-in-One-FluxDev-Workflow

An all-in-one FluxDev workflow for ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. The workflow supports LoRAs, ControlNets, negative prompting with KSampler, dynamic thresholding, inpainting, and more.

Upscale (Hi-res Fix) with Tiled Diffusion: RuntimeError: The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 0 #4

Open Amit30swgoh opened 3 months ago

Amit30swgoh commented 3 months ago

RuntimeError: The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 0

IterativeLatentUpscale[1/4]: 1452.0x1089.0 (scale:1.11) !!! Exception during processing!!! The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 0

Amit30swgoh commented 3 months ago

Error occurred when executing IterativeImageUpscale:

The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 0

```
File "/content/drive/MyDrive/ComfyUI/execution.py", line 152, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 82, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 75, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_pack.py", line 1283, in doit
  refined_latent = IterativeLatentUpscale().doit(latent, upscale_factor, steps, temp_prefix, upscaler, step_mode, unique_id)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_pack.py", line 1237, in doit
  current_latent = upscaler.upscale_shape(step_info, current_latent, new_w, new_h, temp_prefix)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/core.py", line 1704, in upscale_shape
  refined_latent = self.sample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, upscaled_latent, denoise, upscaled_images)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/core.py", line 1645, in sample
  refined_latent = impact_sampling.impact_sample(model, seed, steps, cfg, sampler_name, scheduler,
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_sampling.py", line 226, in impact_sample
  return separated_sample(model, True, seed, advanced_steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_sampling.py", line 214, in separated_sample
  res = sample_with_custom_noise(model, add_noise, seed, cfg, positive, negative, impact_sampler, sigmas, latent_image, noise=noise, callback=callback)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/impact_sampling.py", line 158, in sample_with_custom_noise
  samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image,
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 434, in motion_sample
  return orig_comfy_sample(model, noise, *args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/sampling.py", line 116, in acn_sample
  return orig_comfy_sample(model, *args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/utils.py", line 116, in uncond_multiplier_check_cn_sample
  return orig_comfy_sample(model, *args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/sample.py", line 48, in sample_custom
  samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 729, in sample
  return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 716, in sample
  output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 695, in inner_sample
  samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 600, in sample
  samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/k_diffusion/sampling.py", line 600, in sample_dpmpp_2m
  denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 299, in __call__
  out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 682, in __call__
  return self.predict_noise(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 685, in predict_noise
  return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 279, in sampling_function
  out = calc_cond_batch(model, conds, x, timestep, model_options)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-TiledDiffusion/.patches.py", line 89, in calc_cond_batch
  output = model_options['model_function_wrapper'](model.apply_model, {"input": inputx, "timestep": timestep, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-TiledDiffusion/tiled_diffusion.py", line 424, in __call__
  x_tile_out = model_function(x_tile, ts_tile, **c_tile)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/utils.py", line 68, in apply_model_uncond_cleanup_wrapper
  return orig_apply_model(self, *args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/model_base.py", line 145, in apply_model
  model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/ldm/flux/model.py", line 150, in forward
  out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
File "/content/drive/MyDrive/ComfyUI/comfy/ldm/flux/model.py", line 109, in forward_orig
  vec = vec + self.guidance_in(timestep_embedding(guidance, 256).to(img.dtype))
```
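The failing line adds `vec` to the guidance embedding, and the message is PyTorch's standard elementwise-broadcast check: two tensors may differ at a dimension only if one of them has size 1 there. Here a batch of 4 tiles meets a batch of 2 conditionings at dimension 0, so neither side is a singleton and the addition fails. A minimal sketch of that rule (a hypothetical helper for illustration, not ComfyUI or PyTorch code):

```python
def check_broadcast_dim(size_a: int, size_b: int, dim: int = 0) -> int:
    """Mimic PyTorch's per-dimension broadcast rule: sizes must be
    equal, or one of them must be 1 (a "singleton" dimension)."""
    if size_a != size_b and size_a != 1 and size_b != 1:
        raise RuntimeError(
            f"The size of tensor a ({size_a}) must match the size of "
            f"tensor b ({size_b}) at non-singleton dimension {dim}")
    return max(size_a, size_b)

check_broadcast_dim(4, 1)    # broadcasts fine -> 4
# check_broadcast_dim(4, 2)  # raises RuntimeError, like the error above
```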

You are using image-to-image and ControlNet together, which is not how the workflow is intended to be used. Switch to an empty latent image in the switch node in the workflow and you should be good to go. If you want to keep the original ControlNet image's dimensions, create a get-image-resolution node from that image and connect its width and height outputs to the empty latent node, and use that instead. Thanks for raising this issue; I'll add this option to the next version too, I didn't think about it when I made the workflow.
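For reference, wiring the image's resolution into the empty latent amounts to dividing the pixel dimensions by 8: ComfyUI's standard Empty Latent Image node allocates a zero tensor at 1/8 of the pixel resolution. A rough sketch of that shape calculation (the 4-channel layout is the classic SD-style latent and is an assumption here; other model families use different channel counts):

```python
def empty_latent_shape(width: int, height: int,
                       batch_size: int = 1, channels: int = 4) -> tuple:
    """Shape of the zero latent an Empty Latent Image node would allocate
    for a given pixel resolution: ComfyUI latents are 1/8 the pixel size.
    channels=4 assumes an SD-style latent (an assumption, not Flux-specific)."""
    return (batch_size, channels, height // 8, width // 8)

# Matching the 1452x1089 resolution from the log above:
print(empty_latent_shape(1452, 1089))  # -> (1, 4, 136, 181)
```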

Amit30swgoh commented 3 months ago

It doesn't work, no matter what I try.

image

arthurwolf commented 3 months ago

Note: your image above is just too low resolution to be useful/readable.