FlagOpen / FlagAttention

A collection of memory-efficient attention operators implemented in the Triton language.

Is it possible for your team to implement xformers.ops.memory_efficient_attention? #24

Open radna0 opened 1 month ago

iclementine commented 1 month ago

Thank you. I think the main difference from our implementation of flash_attn is that it takes an extra input, the attention bias. It may take us a while to add this feature.
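
For reference, here is a plain PyTorch sketch of the semantics we would need to match: standard scaled-dot-product attention with an optional additive bias applied to the scores before the softmax. The function name and the tensor layout below are illustrative only, not this repository's API:

```python
import torch

def attention_with_bias_reference(q, k, v, attn_bias=None, scale=None):
    """Reference (non-memory-efficient) attention with an additive bias.

    q, k, v: (batch, heads, seq, head_dim) -- illustrative layout, not
    necessarily the layout xformers or this repo uses.
    attn_bias: broadcastable to (batch, heads, seq_q, seq_k); added to the
    attention scores before the softmax.
    """
    if scale is None:
        scale = q.shape[-1] ** -0.5
    scores = torch.einsum("bhqd,bhkd->bhqk", q, k) * scale
    if attn_bias is not None:
        scores = scores + attn_bias  # the extra input discussed above
    return torch.einsum("bhqk,bhkd->bhqd", scores.softmax(dim=-1), v)
```

A fused Triton kernel would fold the bias addition into the inner loop over key/value blocks rather than materializing the full score matrix.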

radna0 commented 1 month ago

Thank you @iclementine! Will it take long to implement this? I'm trying to run this on AMD GPUs and have had some success with the HIP backend of Triton. Do you think it's possible to run on both NVIDIA and AMD? What about the performance difference?

iclementine commented 1 month ago

I don't have an AMD GPU. There may be some issues running it with Triton's other backends (some configs exceeding resource limits, some passes not supported, etc.). If you make any modifications to get it running on AMD GPUs, please let us know. Thank you.
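
One possible workaround for the resource-limit issue (untested on my side, since I don't have the hardware): key the autotuning configs on whether PyTorch was built for ROCm/HIP, and offer smaller tiles there. The block sizes below are illustrative guesses, not tuned values from this repository:

```python
import torch
import triton

def attention_configs():
    # Hypothetical helper: smaller tiles on ROCm/HIP, where shared-memory
    # and register budgets can differ from NVIDIA GPUs.
    if torch.version.hip is not None:  # AMD build of PyTorch
        block_sizes = [(64, 32), (32, 32)]
    else:
        block_sizes = [(128, 64), (64, 64)]
    return [
        triton.Config({"BLOCK_M": m, "BLOCK_N": n}, num_warps=4, num_stages=s)
        for (m, n) in block_sizes
        for s in (1, 2)
    ]
```

These configs would then be passed to the kernel via `@triton.autotune(configs=attention_configs(), key=[...])`, so the same source works on both backends.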

It would take about 1-2 weeks to implement this, given my current plans.

radna0 commented 1 month ago

Hi @iclementine. Were you able to implement it? I will look into the configuration for AMD GPUs.

iclementine commented 1 month ago

Sorry about that. I have been working on other projects and will be occupied for a while.