Open zhanghongyong123456 opened 1 month ago
We only support bf16 for inference. Please use a GPU that supports it, such as an A10 or A100. If you don't have one locally, you can try EasyAnimate in the cloud: https://gallery.pai-ml.com/#/preview/deepLearning/cv/easyanimate. We provide free A10 GPU time for new PAI users; read the instructions carefully to receive the free GPU time.
Error. The error information is:

```
No operator found for `memory_efficient_attention_forward` with inputs:
     query : shape=(1536, 1008, 1, 72) (torch.bfloat16)
     key   : shape=(1536, 1008, 1, 72) (torch.bfloat16)
     value : shape=(1536, 1008, 1, 72) (torch.bfloat16)
     attn_bias : <class 'NoneType'>
     p : 0.0
`decoderF` is not supported because:
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
`flshattF@v2.3.6` is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
`tritonflashattF` is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
    requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
`cutlassF` is not supported because:
    bf16 is only supported on A100+ GPUs
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    dtype=torch.bfloat16 (supported: {torch.float32})
    has custom scale
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 72
```
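For reference, the check the error describes can be sketched in plain Python. This is an illustrative helper, not part of xformers or EasyAnimate; on a real machine the capability tuple comes from `torch.cuda.get_device_capability()`, and the log above shows a capability of (7, 5), which is why every bf16 backend is rejected:

```python
def bf16_attention_supported(capability: tuple) -> bool:
    """Return True when bf16 attention kernels can run on this device.

    The xformers bf16 backends require an Ampere-or-newer GPU, i.e.
    CUDA compute capability >= (8, 0); the GPU in the log reports (7, 5).
    """
    return tuple(capability) >= (8, 0)


def pick_dtype(capability: tuple) -> str:
    # Hypothetical fallback policy: use fp16 on pre-Ampere devices
    # instead of bf16, so attention kernels can still dispatch.
    return "bfloat16" if bf16_attention_supported(capability) else "float16"
```

So on the reporter's GPU, `pick_dtype((7, 5))` would fall back to `"float16"`, while an A10 or A100 (capability (8, 6) or (8, 0)) keeps `"bfloat16"`. Whether fp16 inference gives acceptable quality with this model is a separate question for the maintainers.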