aigc-apps / EasyAnimate

📺 An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion
Apache License 2.0

When I run app.py, I get a bf16-related error. How do I fix it? #11

Open · zhanghongyong123456 opened this issue 1 month ago

zhanghongyong123456 commented 1 month ago

I hit an error. The full error message is:

    No operator found for memory_efficient_attention_forward with inputs:
         query     : shape=(1536, 1008, 1, 72) (torch.bfloat16)
         key       : shape=(1536, 1008, 1, 72) (torch.bfloat16)
         value     : shape=(1536, 1008, 1, 72) (torch.bfloat16)
         attn_bias : <class 'NoneType'>
         p         : 0.0
    decoderF is not supported because:
        attn_bias type is <class 'NoneType'>
        bf16 is only supported on A100+ GPUs
    flshattF@v2.3.6 is not supported because:
        requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
        bf16 is only supported on A100+ GPUs
    tritonflashattF is not supported because:
        requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
        bf16 is only supported on A100+ GPUs
        operator wasn't built - see python -m xformers.info for more info
        triton is not available
        requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
    cutlassF is not supported because:
        bf16 is only supported on A100+ GPUs
    smallkF is not supported because:
        max(query.shape[-1] != value.shape[-1]) > 32
        dtype=torch.bfloat16 (supported: {torch.float32})
        has custom scale
        bf16 is only supported on A100+ GPUs
        unsupported embed per head: 72
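Every backend in this traceback is rejected for one of two reasons: the bf16 attention kernels require compute capability (8, 0) or newer (Ampere-class, e.g. A10/A100/L4) while this GPU reports (7, 5), a Turing-class card, or the operator simply wasn't built into the installed xformers (as the traceback itself suggests, `python -m xformers.info` lists which operators your build ships). A minimal diagnostic sketch, assuming only stock PyTorch and not part of EasyAnimate, that surfaces the capability check the message is failing on:

```python
# Hypothetical diagnostic, not part of the EasyAnimate repo: print the
# facts the xformers error above is checking against.
import torch

major, minor = torch.cuda.get_device_capability()
print(f"{torch.cuda.get_device_name()}: compute capability ({major}, {minor})")

# The bf16 attention kernels need sm80+ (capability >= (8, 0)), per the
# "requires GPU with sm80 minimum compute capacity" line in the traceback.
if (major, minor) < (8, 0):
    print("Pre-Ampere GPU: xformers bf16 attention will raise exactly this error.")
else:
    print("Ampere or newer: bf16 attention should be available.")
```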

zouxinyi0625 commented 1 month ago

We only support bf16 for inference, so please use a suitable machine, such as an A10 or A100. If you don't have one locally, you can try EasyAnimate in the cloud: https://gallery.pai-ml.com/#/preview/deepLearning/cv/easyanimate. We provide free A10 GPU time for new PAI users; read the instructions carefully to claim it.
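To confirm that a candidate machine really can run the call that failed, a small smoke test can reproduce it directly. This is a sketch, not part of the repo (the script name and tensor sizes are arbitrary); it calls the same xformers.ops.memory_efficient_attention operator in bf16 that the traceback names:

```python
# smoke_test_bf16_attention.py - hypothetical check, not from this repo.
# Runs the operator the traceback names, in bf16, at a small size; if this
# succeeds, app.py's attention calls should also find a usable kernel.
import torch
import xformers.ops as xops

q = torch.randn(2, 128, 1, 72, dtype=torch.bfloat16, device="cuda")  # (batch, seq, heads, head_dim)
k = torch.randn_like(q)
v = torch.randn_like(q)

try:
    out = xops.memory_efficient_attention(q, k, v)
    print("bf16 attention OK:", tuple(out.shape), out.dtype)
except NotImplementedError as err:
    # xformers raises NotImplementedError when no backend accepts the inputs,
    # which is the error reported at the top of this issue.
    print("This GPU/xformers build cannot run bf16 attention:")
    print(err)
```

As a quicker first check, torch.cuda.is_bf16_supported() returns True on Ampere (sm80) and newer, which matches the "bf16 is only supported on A100+ GPUs" lines in the traceback.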