NVlabs / SegFormer

Official PyTorch implementation of SegFormer
https://arxiv.org/abs/2105.15203

K/V compression in attention may cause small targets and details to be lost? #132

Open swjtulinxi opened 9 months ago

swjtulinxi commented 9 months ago

In the attention module, the keys and values are spatially compressed to reduce the computation cost. Won't small targets and fine details be lost because of this?
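
For reference, below is a minimal sketch of the sequence-reduction (efficient self-attention) mechanism described in the SegFormer paper, which is what the question refers to: only the key/value tokens are pooled by a strided convolution with reduction ratio `sr_ratio`, while the query sequence keeps full resolution. Class and variable names here are illustrative and not copied from the repository.

```python
import torch
import torch.nn as nn

class EfficientSelfAttention(nn.Module):
    """Sketch of sequence-reduction attention (SegFormer-style).

    K/V are spatially reduced by `sr_ratio`, so the attention map shrinks
    from N x N to N x (N / sr_ratio^2). Queries stay at full resolution.
    """

    def __init__(self, dim, num_heads=8, sr_ratio=4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5

        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)

        self.sr_ratio = sr_ratio
        if sr_ratio > 1:
            # Strided conv merges each sr_ratio x sr_ratio patch of tokens
            # into one token: this is the K/V compression asked about.
            self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
            self.norm = nn.LayerNorm(dim)

    def forward(self, x, H, W):
        B, N, C = x.shape  # N = H * W tokens
        q = self.q(x).reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        if self.sr_ratio > 1:
            x_ = x.transpose(1, 2).reshape(B, C, H, W)
            x_ = self.sr(x_).reshape(B, C, -1).transpose(1, 2)  # shorter K/V sequence
            x_ = self.norm(x_)
        else:
            x_ = x
        kv = self.kv(x_).reshape(B, -1, 2, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)
        k, v = kv[0], kv[1]

        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, heads, N, N / sr_ratio^2)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

Note that each of the N full-resolution queries still produces its own output token; only the set of keys/values each query attends to becomes coarser, so the output sequence length (and thus per-pixel detail in the feature map) is unchanged, even though the attention is computed against pooled context.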