frank-xwang / InstanceDiffusion

[CVPR 2024] Code release for "InstanceDiffusion: Instance-level Control for Image Generation"
https://people.eecs.berkeley.edu/~xdwang/projects/InstDiff/
Apache License 2.0

question about Instance-Masked Attention #6

Closed jsg921019 closed 7 months ago

jsg921019 commented 7 months ago

Thank you for sharing interesting work.

I have a question about Instance-Masked Attention. The current code does not seem to apply Instance-Masked Attention (return_att_masks = False). Is this because not applying Instance-Masked Attention yields better generation quality?

Secondly, is Instance-Masked Attention applied during training, or only at inference?

Thank you in advance.

frank-xwang commented 7 months ago

Hi, thank you for expressing your interest. Currently, we have return_att_masks set to False because Flash Attention does not yet support attention masks (check it here). However, if speed and memory usage are not primary concerns for your application, you may opt to set return_att_masks to True. It's worth noting that during training we had this option enabled. Hope it helps!
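For readers wondering what such an instance attention mask does in principle: the idea is to let each image token attend only to the instance conditions whose region it falls inside, which is implemented by masking disallowed positions before the softmax. Below is a minimal, self-contained NumPy sketch of this general pattern; the function names, the box format `(x0, y0, x1, y1)`, and the single-head layout are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def build_instance_attn_mask(h, w, boxes):
    """Boolean mask of shape (h*w, n_instances): image token (y, x) may
    attend to instance j only if it lies inside instance j's bounding box.
    boxes: list of (x0, y0, x1, y1) in pixel coordinates (assumed format)."""
    mask = np.zeros((h * w, len(boxes)), dtype=bool)
    for j, (x0, y0, x1, y1) in enumerate(boxes):
        for y in range(h):
            for x in range(w):
                if x0 <= x < x1 and y0 <= y < y1:
                    mask[y * w + x, j] = True
    return mask

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention with a boolean mask; disallowed
    key positions are set to -inf before the softmax so they receive
    zero weight. q: (n_q, d), k/v: (n_k, d), mask: (n_q, n_k)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    # Rows with no allowed key would yield NaNs; fall back to uniform there.
    all_blocked = ~mask.any(axis=-1)
    scores[all_blocked] = 0.0
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

This additive-mask formulation is exactly what fused kernels like Flash Attention historically could not consume, which is why disabling the mask trades instance separation for speed and memory.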

jsg921019 commented 7 months ago

Thank you for the precise and fast feedback!