Closed jsg921019 closed 7 months ago
Hi, thank you for expressing your interest. Currently, we have `return_att_masks` set to `False`, as Flash Attention does not yet support attention masks (check it here). However, if speed and memory usage are not primary concerns for your application, you may opt to set `return_att_masks` to `True`. It's worth noting that we had this option enabled during training. Hope it helps!
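For intuition, here is a minimal NumPy sketch of what an instance attention mask does: each query position is only allowed to attend to the key positions belonging to its instance, and masked positions are driven to (near-)zero weight before the softmax. The function name and shapes are illustrative, not this repo's API; stock Flash Attention kernels fuse the softmax and do not accept such an arbitrary boolean mask, which is why `return_att_masks=False` is the fast default.

```python
import numpy as np

def masked_attention(q, k, v, att_mask=None):
    """Scaled dot-product attention with an optional boolean mask.

    att_mask[i, j] = True  -> query i may attend to key j
    att_mask[i, j] = False -> position j is blocked for query i
    (Illustrative sketch; not the repository's actual implementation.)
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if att_mask is not None:
        # Blocked positions get a large negative score, so their
        # softmax weight underflows to (effectively) zero.
        scores = np.where(att_mask, scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

With `att_mask=None` this reduces to ordinary attention, which is what the Flash Attention path computes.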
Thank you for the precise and fast feedback!
Thank you for sharing interesting work.
I have a question about Instance-Masked Attention. The current code does not seem to apply it (`return_att_masks = False`). Is this because generation quality is better without Instance-Masked Attention?
Secondly, is Instance-Masked Attention applied during training, or only at inference?
Thank you in advance.