Qiyuan-Ge / PaintMind

Fast and controllable text-to-image model.
Apache License 2.0

xformers error: NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs #8

Open JunZhan2000 opened 1 year ago

JunZhan2000 commented 1 year ago

```
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query : shape=(8, 1024, 1, 64) (torch.float32)
     key   : shape=(8, 1024, 1, 64) (torch.float32)
     value : shape=(8, 1024, 1, 64) (torch.float32)
     attn_bias : <class 'NoneType'>
     p : 0.0
`cutlassF` is not supported because:
    device=cpu (supported: {'cuda'})
`flshattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
`tritonflashattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 64
```

Hello, thank you very much for your work. After installing xformers, I get the error above. My server has an A800 GPU. I tried every version from 0.0.16 to the latest, but none of them solved the problem. Could you help me look into it, or tell me exactly which environment versions you are using? I have searched all over the Internet.
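Reading the error, every xformers kernel rejects these inputs for a stated reason: the tensors are on CPU and in float32, and every listed kernel requires CUDA (flash attention additionally requires float16/bfloat16). The dispatch can be sketched roughly as below; this is a reconstruction from the rejection reasons in the message only, not xformers' actual selection code:

```python
def pick_attention_op(device: str, dtype: str):
    """Rough sketch of why no kernel matches in the error above.

    Encodes only the constraints the error message states:
    - cutlassF: cuda only
    - flshattF: cuda only, float16/bfloat16 only
    Returns the first usable kernel name, or None (which is what
    xformers turns into NotImplementedError).
    """
    if device == "cuda":
        # flash attention needs half precision on top of cuda
        if dtype in ("float16", "bfloat16"):
            return "flshattF"
        # cutlass accepts float32 as long as the device is cuda
        return "cutlassF"
    # cpu tensors match nothing -> the NotImplementedError in this issue
    return None
```

So the fix direction is to make sure the model and inputs are on the GPU (and optionally in half precision) before the attention call; on CPU, no xformers kernel applies regardless of version.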

JunZhan2000 commented 1 year ago

I also found that training works fine; the error only occurs during inference.

cipolee commented 1 year ago

I get the same error in training. How can I turn off xformers?

JunZhan2000 commented 1 year ago

> I get the same error in training. How can I turn off xformers?

Uninstalling xformers solves the error, but then the model no longer uses flash attention.
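What you give up is only speed and memory, not correctness: `memory_efficient_attention` computes ordinary scaled dot-product attention. As a mental model of the fallback, here is a minimal NumPy sketch using the same `(batch, seq, heads, head_dim)` layout as the shapes in the error above; it is not a drop-in replacement for the xformers op, just the math it implements:

```python
import numpy as np

def sdpa_numpy(q, k, v):
    """Plain scaled dot-product attention, (B, M, H, K) layout."""
    # Move heads next to batch: (B, H, M, K)
    q, k, v = (np.transpose(t, (0, 2, 1, 3)) for t in (q, k, v))
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = np.einsum("bhmk,bhnk->bhmn", q, k) * scale
    # Numerically stable softmax over the key axis
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    out = np.einsum("bhmn,bhnk->bhmk", weights, v)
    # Back to (B, M, H, K)
    return np.transpose(out, (0, 2, 1, 3))
```

In PyTorch itself, recent versions expose the same computation (with fused kernels where available) as `torch.nn.functional.scaled_dot_product_attention`, which is a reasonable substitute when xformers is uninstalled.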

cipolee commented 1 year ago

thanks!