ApplyPulidFlux
No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 577, 16, 64) (torch.bfloat16)
     key         : shape=(1, 577, 16, 64) (torch.bfloat16)
     value       : shape=(1, 577, 16, 64) (torch.bfloat16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`ck_decoderF` is not supported because:
    device=mps (supported: {'cuda'})
    bf16 is only supported on A100+ GPUs
    operator wasn't built - see `python -m xformers.info` for more info
`ckF` is not supported because:
    device=mps (supported: {'cuda'})
    bf16 is only supported on A100+ GPUs
    operator wasn't built - see `python -m xformers.info` for more info