Closed: Codeschreibaer closed this issue 4 months ago.
Reference, Face, Style, Composition all use IP-Adapter (https://github.com/cubiq/ComfyUI_IPAdapter_plus). The error is probably related to an update of this dependency (which is, however, necessary to get the style/composition functionality).
I think this issue has been reported multiple times, but it can easily crop up again. It's probably not easy to test or fix because it's hardware-specific; it seems to happen on the 10xx GPUs. Latest discussion here: https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/108#issuecomment-2041448418
Edit: Got it to work for SDXL models as well with "server_arguments": "--dont-upcast-attention --force-fp16". Is there a possibility to have this setting applied only for SDXL models? (What a mess.)
Edit: As it seems, some systems have even more issues on 30xx than on 10xx. Link: https://www.youtube.com/watch?v=Ngt5Oqa7aqE&lc=Ugx0xb5uhdnqqbVi2z94AaABAg.A2gCz2uY35NA2gJLWifDEH There seems to be something wrong in this version.
Edit: Here is a link to the same (?) issue with AUTOMATIC1111 and the RTX 20xx series, though I can't tell whether IP-Adapter Plus is involved. I mention it anyway because it may be relevant, and because RTX has a different architecture than the GTX 10xx series: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/10853
Thanks for the link. Following your hint, I found out that everything is working with 1.5 models. But that's only a workaround. The error message itself also looks suspicious to me; other people assumed a bug in ComfyUI.
So what is planned? According to other comments, especially the YouTube comments on streamtabulous' post, it is not only a 10xx issue. On the contrary, the 30xx cards seem to have even more issues, and of more kinds.
I ran all tests on a 3060, and did not encounter issues. Reference/Style/Composition/Face are working.
As for the 10xx cards, someone with the hardware needs to reproduce this directly in ComfyUI with a simple example workflow from the IP-Adapter nodes and report the full error there. I'm not familiar enough with that code to blindly guess where the issue might be.
If it worked before, as you say, it would also help to test different versions of the IP-Adapter nodes to find out when the issue was introduced.
Is the IP-Adapter involved in "fill", and did it change in 1.17 or 1.17.1? I got the dtype error message with fill this evening for the first time ever, for both 1.5 and XL models. Getting annoying. Looks like I have to roll back to 1.16, or change the arguments.
It's used for fill/expand only if you don't provide a prompt (as a replacement of sorts).
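In other words, the selection amounts to roughly this. An illustrative sketch only; the function names here are hypothetical stand-ins, not the plugin's actual code:

```python
def build_fill_conditioning(prompt, surrounding_image, encode_text, apply_ip_adapter):
    """Hypothetical sketch of the behaviour described above: for fill/expand,
    IP-Adapter is applied only when no text prompt is given, as a stand-in for it."""
    if prompt:
        # A text prompt drives the fill; IP-Adapter is not involved here.
        return encode_text(prompt)
    # No prompt: the surrounding image is fed through IP-Adapter instead.
    return apply_ip_adapter(surrounding_image)
```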
To say it clearly: this is a new issue. All my dtype errors so far concerned the IP face adapter V2 (SDXL). Now I got the same with fill, for both XL and 1.5. (Without text prompts, so IP-Adapter only then, right?) Both work with "server_arguments": "--dont-upcast-attention --force-fp16". This results in 2+1 questions.
Plus: when changing the server arguments to "server_arguments": "--dont-upcast-attention --force-fp16", the JSON file opens automatically with a line "server_arguments": "null" before it. Is this correct?
Need to see the full error trace from the server.log to say for sure, but since it's related to reference/face/style it is likely that it's an issue with IP-Adapter. And yes that has been updated. Style transfer is one of the new features.
Plus: when changing the server arguments to "server_arguments": "--dont-upcast-attention --force-fp16", the JSON file opens automatically with a line "server_arguments": "null" before it. Is this correct?
Don't really understand that part. "null" would probably be bad, null might be fine.
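To illustrate the difference (a quick standalone check, assuming the plugin reads the file with a standard JSON parser; this is not plugin code):

```python
import json

# A JSON null value parses to None and can simply mean "no extra arguments":
print(json.loads('{"server_arguments": null}'))    # {'server_arguments': None}

# The string "null" also parses, but would then be passed on as a literal
# argument called "null" -- the case that would probably be bad:
print(json.loads('{"server_arguments": "null"}'))  # {'server_arguments': 'null'}
```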
"null" is a copy and pace error, sry for that. It inserts a line with null, not with "null". So you changed both ip adapter and ip dace adapter? (Asking cause not sure, if face function needs not only face adapter, but the genereall ip adapter, too, for instance when text prompting.)
Here is my server log from today: first use of the fill feature with "server_arguments": "--dont-upcast-attention --force-fp16", second without.
```
2024-04-30 07:33:54,132 INFO Starting server
2024-04-30 07:33:54,132 INFO
2024-04-30 07:33:54,132 INFO To see the GUI go to: http://127.0.0.1:8188
2024-04-30 07:33:56,941 INFO got prompt
2024-04-30 07:33:57,258 INFO model_type EPS
2024-04-30 07:33:59,113 INFO Using pytorch attention in VAE
2024-04-30 07:33:59,113 INFO Using pytorch attention in VAE
2024-04-30 07:34:01,060 INFO clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
2024-04-30 07:34:03,265 INFO Requested to load CLIPVisionModelProjection
2024-04-30 07:34:03,265 INFO Loading 1 new model
2024-04-30 07:34:03,883 INFO D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\ldm\modules\attention.py:345: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
2024-04-30 07:34:03,883 INFO out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
2024-04-30 07:34:07,256 INFO Requested to load SDXLClipModel
2024-04-30 07:34:07,256 INFO Loading 1 new model
2024-04-30 07:34:07,917 INFO Requested to load AutoencoderKL
2024-04-30 07:34:07,917 INFO Loading 1 new model
2024-04-30 07:34:11,587 INFO [ApplyFooocusInpaint] 960 Lora keys loaded, 0 remaining keys not found in model.
2024-04-30 07:34:11,649 INFO [comfyui-inpaint-nodes] Injecting patched comfy.model_patcher.ModelPatcher.calculate_weight
2024-04-30 07:34:11,649 INFO Requested to load SDXL
2024-04-30 07:34:11,649 INFO Loading 1 new model
2024-04-30 07:34:11,770 INFO loading in lowvram mode 2677.4278602600098
2024-04-30 07:34:17,128 INFO 0%| | 0/7 [00:00<?, ?it/s]D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\python\lib\site-packages\torchsde_brownian\brownian_interval.py:608: UserWarning: Should have tb<=t1 but got tb=14.614644050598145 and t1=14.614643.
2024-04-30 07:34:17,128 INFO warnings.warn(f"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.")
2024-04-30 07:34:24,194 INFO 14%|█▍ | 1/7 [00:06<00:38, 6.35s/it] 14%|█▍ | 1/7 [00:10<01:01, 10.25s/it]
2024-04-30 07:34:28,222 INFO !!! Exception during processing !!!
2024-04-30 07:34:28,223 INFO Traceback (most recent call last):
2024-04-30 07:34:28,223 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\execution.py", line 151, in recursive_execute
2024-04-30 07:34:28,223 INFO output_data, output_ui = get_output_data(obj, input_data_all)
2024-04-30 07:34:28,223 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\execution.py", line 81, in get_output_data
2024-04-30 07:34:28,223 INFO return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
2024-04-30 07:34:28,224 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\execution.py", line 74, in map_node_over_list
2024-04-30 07:34:28,224 INFO results.append(getattr(obj, func)(slice_dict(input_data_all, i)))
2024-04-30 07:34:28,224 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\nodes.py", line 1378, in sample
2024-04-30 07:34:28,224 INFO return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
2024-04-30 07:34:28,224 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\nodes.py", line 1314, in common_ksampler
2024-04-30 07:34:28,225 INFO samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
2024-04-30 07:34:28,225 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\sample.py", line 37, in sample
2024-04-30 07:34:28,225 INFO samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
2024-04-30 07:34:28,225 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\samplers.py", line 755, in sample
2024-04-30 07:34:28,226 INFO return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
2024-04-30 07:34:28,226 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\samplers.py", line 657, in sample
2024-04-30 07:34:28,226 INFO return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
2024-04-30 07:34:28,226 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\samplers.py", line 644, in sample
2024-04-30 07:34:28,226 INFO output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
2024-04-30 07:34:28,227 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\samplers.py", line 623, in inner_sample
2024-04-30 07:34:28,227 INFO samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
2024-04-30 07:34:28,227 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\samplers.py", line 534, in sample
2024-04-30 07:34:28,227 INFO samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, self.extra_options)
2024-04-30 07:34:28,227 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
2024-04-30 07:34:28,228 INFO return func(*args, kwargs)
2024-04-30 07:34:28,228 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\k_diffusion\sampling.py", line 707, in sample_dpmpp_sde_gpu
2024-04-30 07:34:28,228 INFO return sample_dpmpp_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, r=r)
2024-04-30 07:34:28,228 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
2024-04-30 07:34:28,229 INFO return func(*args, *kwargs)
2024-04-30 07:34:28,229 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\k_diffusion\sampling.py", line 559, in sample_dpmpp_sde
2024-04-30 07:34:28,229 INFO denoised_2 = model(x_2, sigma_fn(s) s_in, extra_args)
2024-04-30 07:34:28,229 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\samplers.py", line 272, in call
2024-04-30 07:34:28,229 INFO out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
2024-04-30 07:34:28,230 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\samplers.py", line 610, in call
2024-04-30 07:34:28,230 INFO return self.predict_noise(*args, kwargs)
2024-04-30 07:34:28,230 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\samplers.py", line 613, in predict_noise
2024-04-30 07:34:28,230 INFO return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
2024-04-30 07:34:28,230 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\samplers.py", line 258, in sampling_function
2024-04-30 07:34:28,231 INFO out = calc_cond_batch(model, conds, x, timestep, model_options)
2024-04-30 07:34:28,231 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\samplers.py", line 218, in calc_cond_batch
2024-04-30 07:34:28,231 INFO output = model.apply_model(inputx, timestep, c).chunk(batch_chunks)
2024-04-30 07:34:28,231 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\model_base.py", line 97, in apply_model
2024-04-30 07:34:28,231 INFO model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, extra_conds).float()
2024-04-30 07:34:28,232 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\python\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
2024-04-30 07:34:28,232 INFO return self._call_impl(*args, *kwargs)
2024-04-30 07:34:28,232 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\python\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
2024-04-30 07:34:28,232 INFO return forward_call(args, kwargs)
2024-04-30 07:34:28,232 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 850, in forward
2024-04-30 07:34:28,233 INFO h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
2024-04-30 07:34:28,233 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed
2024-04-30 07:34:28,233 INFO x = layer(x, context, transformer_options)
2024-04-30 07:34:28,233 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\python\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
2024-04-30 07:34:28,234 INFO return self._call_impl(*args, kwargs)
2024-04-30 07:34:28,234 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\python\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
2024-04-30 07:34:28,234 INFO return forward_call(*args, *kwargs)
2024-04-30 07:34:28,234 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\ldm\modules\attention.py", line 633, in forward
2024-04-30 07:34:28,234 INFO x = block(x, context=context[i], transformer_options=transformer_options)
2024-04-30 07:34:28,235 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\python\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
2024-04-30 07:34:28,235 INFO return self._call_impl(args, kwargs)
2024-04-30 07:34:28,235 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\python\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
2024-04-30 07:34:28,235 INFO return forward_call(*args, *kwargs)
2024-04-30 07:34:28,235 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\ldm\modules\attention.py", line 460, in forward
2024-04-30 07:34:28,236 INFO return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
2024-04-30 07:34:28,236 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 191, in checkpoint
2024-04-30 07:34:28,236 INFO return func(inputs)
2024-04-30 07:34:28,236 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\ldm\modules\attention.py", line 557, in _forward
2024-04-30 07:34:28,236 INFO n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
2024-04-30 07:34:28,237 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\CrossAttentionPatch.py", line 161, in call
2024-04-30 07:34:28,237 INFO out_ip = optimized_attention(q, ip_k, ip_v, extra_options["n_heads"])
2024-04-30 07:34:28,237 INFO File "D:!!Win7Users\js\AppData\Roaming\krita\ai_diffusion\server\ComfyUI\comfy\ldm\modules\attention.py", line 345, in attention_pytorch
2024-04-30 07:34:28,237 INFO out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
2024-04-30 07:34:28,237 INFO RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.
2024-04-30 07:34:28,238 INFO
2024-04-30 07:34:28,286 INFO Prompt executed in 31.27 seconds
```
Hardware / OS: GTX 1070, X470 Chipset, Ryzen 7 2700x, Win11
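For context, the RuntimeError at the end of the trace is easy to reproduce in isolation: PyTorch's scaled_dot_product_attention rejects mixed dtypes, so a half-precision query paired with float32 key/value fails with the same kind of error. A minimal standalone sketch, not taken from the plugin:

```python
import torch

# Query in fp16, key/value in fp32 -- the same mismatch the traceback shows
# between the model's query and the IP-Adapter's key/value tensors.
q = torch.randn(1, 8, 16, 64, dtype=torch.float16)
k = torch.randn(1, 8, 16, 64, dtype=torch.float32)
v = torch.randn(1, 8, 16, 64, dtype=torch.float32)

try:
    torch.nn.functional.scaled_dot_product_attention(q, k, v)
except RuntimeError as err:
    print(err)  # dtype mismatch is rejected: query, key and value must match
```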
Just for general info, not because I actually intend to do it: is it possible to have the new and the old adapter in one directory and choose manually which one to work with? Or are there config settings that force the use of the new one with 1.17.1?
So you changed both the IP adapter and the IP face adapter?
They are different models (those were not changed) but are both handled by the same extension nodes and share a lot of code. So it's not possible to update them independently.
Is it possible to have the new and the old adapter in one directory and choose manually which one to work with?
It's possible to some extent. But a while ago the IP-Adapter nodes significantly changed their interface, and version 1.16+ of the plugin is not compatible with the old one.
Hi Acly! Could you find anything in my log file?
It just confirmed that it's a disagreement between IP-adapter and the base ComfyUI sampler.
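A common way out of such a disagreement is to cast the IP-Adapter key/value tensors to the query's dtype right before the attention call. A hedged sketch of what such a guard could look like (the variable names follow the traceback above; this is an assumption about a possible fix, not the actual patch):

```python
def attention_with_dtype_guard(optimized_attention, q, ip_k, ip_v, n_heads):
    # Align key/value dtypes with the query so scaled_dot_product_attention
    # never sees a Half query next to float key/value tensors.
    if ip_k.dtype != q.dtype:
        ip_k = ip_k.to(dtype=q.dtype)
    if ip_v.dtype != q.dtype:
        ip_v = ip_v.to(dtype=q.dtype)
    return optimized_attention(q, ip_k, ip_v, n_heads)
```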
Sounds like a bug; the question is on which level.
Edit: same with reference, randomly working or not. It always worked in previous versions.
On the GTX 1070: I had the error (see the reference line) in 1.16.1 with the face adapter. First I solved it with forced FP16; then I discovered it works without forced FP16 by switching the GPU setting from low to medium CUDA. (It was on low, following some YouTube tips.)
In 1.17.0, neither of the two workarounds works anymore, neither for the face adapter nor for the new style transfer. That face worked before seems to indicate a solvable bug, and lets me hope that fixing it will repair style transfer too. (I was very eager to try it and am very disappointed now.)
Nevertheless, great work, Acly, and incredible speed of development! (By the way, where can I see which samplers are de facto available?) Thanks.