balazik / ComfyUI-PuLID-Flux

PuLID-Flux ComfyUI implementation
Apache License 2.0

No operator found for `memory_efficient_attention_forward` with inputs: #44

Open buy3601223 opened 2 weeks ago

buy3601223 commented 2 weeks ago

```
No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 577, 16, 64) (torch.bfloat16)
     key         : shape=(1, 577, 16, 64) (torch.bfloat16)
     value       : shape=(1, 577, 16, 64) (torch.bfloat16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
`flshattF@v2.5.6` is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
`cutlassF` is not supported because:
    bf16 is only supported on A100+ GPUs
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    dtype=torch.bfloat16 (supported: {torch.float32})
    has custom scale
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 64
```
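Every backend listed above rejects the call for the same underlying reason: the bf16 kernels in xformers require compute capability 8.0 (Ampere/A100 class) or newer, and capability (7, 5) is a Turing-generation card. Below is a minimal sketch that reproduces the failing call and sidesteps it by running the kernel in fp16 on older GPUs; the function name and the fallback strategy are illustrative, not part of this repo:

```python
import torch
import xformers.ops as xops

# Illustrative workaround (not repo code): GPUs below compute capability 8.0
# (e.g. Turing, which reports (7, 5)) have no bf16 xformers kernels, so run
# the attention in fp16 and cast the result back to bf16.
def attention_with_dtype_fallback(q, k, v):
    major, _ = torch.cuda.get_device_capability(q.device)
    if q.dtype == torch.bfloat16 and major < 8:
        out = xops.memory_efficient_attention(q.half(), k.half(), v.half())
        return out.to(torch.bfloat16)
    return xops.memory_efficient_attention(q, k, v)

# Same shapes as in the traceback: (batch, seq_len=577, heads=16, head_dim=64)
q = torch.randn(1, 577, 16, 64, dtype=torch.bfloat16, device="cuda")
out = attention_with_dtype_fallback(q, q.clone(), q.clone())
print(out.shape, out.dtype)  # torch.Size([1, 577, 16, 64]) torch.bfloat16
```

Patching this into the node would mean wrapping the attention call inside the PuLID encoder; keeping the model in fp16 end to end (for example, launching ComfyUI with its `--force-fp16` flag) should achieve the same effect without code changes.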

taal099 commented 1 week ago

I'm having this issue too, and nothing I've tried fixes it. The GGUF model works with the PuLID workflow, but the FP8 model fails with: "requires device with capability > (8, 0) but your GPU has capability (7, 5)".

pytorch version: 2.2.2+cu118
xformers version: 0.0.25.post1+cu118
Set vram state to: NORMAL_VRAM
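The version lines above point at the same root cause as the traceback. A quick diagnostic to confirm it (plain PyTorch, nothing from this repo):

```python
import torch

print(torch.__version__)                    # 2.2.2+cu118 in the setup above
print(torch.cuda.get_device_capability(0))  # (7, 5) on Turing (GTX 16xx / RTX 20xx)
print(torch.cuda.is_bf16_supported())       # False below capability (8, 0)
```

If the last line prints False, any path that feeds bf16 tensors into xformers will fail on this card regardless of checkpoint format, which is consistent with the GGUF workflow succeeding while the FP8 one does not.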
