Closed MartinSmeyer closed 1 year ago
It is a pleasure to see that you are interested in our work. DeAOTT has proven to be highly efficient in terms of image encoding, feature propagation, and mask decoding. If you aim to further accelerate DeAOTT, please consider the following recommendations:
Thanks for your pointers @z-x-yang, much appreciated!
Hey @z-x-yang,
awesome repository, thank you! :)
The DeAOTT model is performing really well, so I was wondering whether the propagation/LSTT modules could be trimmed down even further, since they account for most of the runtime and memory.
Do you think the network would require retraining, or could we simply prune it?
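For context on what I mean by pruning: in its simplest structured form it just drops the lowest-magnitude rows of a weight matrix, shrinking the layer without retraining (though accuracy usually suffers until fine-tuned). A minimal framework-free sketch; the function name and data are illustrative, not from this repo:

```python
def prune_rows(weights, keep_ratio):
    """Keep the rows with the largest L1 norm; drop the rest (structured pruning)."""
    norms = [(sum(abs(w) for w in row), i) for i, row in enumerate(weights)]
    keep = max(1, int(len(weights) * keep_ratio))  # always keep at least one row
    kept_idx = sorted(i for _, i in sorted(norms, reverse=True)[:keep])
    return [weights[i] for i in kept_idx]

# Toy 4x2 weight matrix; rows 1 and 3 have the largest L1 norms.
W = [[0.1, -0.2], [1.5, 0.7], [0.0, 0.05], [-0.9, 0.4]]
print(prune_rows(W, 0.5))  # [[1.5, 0.7], [-0.9, 0.4]]
```

In practice the same idea applies per attention head or per channel rather than per raw row, and frameworks ship utilities for it (e.g. PyTorch's `torch.nn.utils.prune`).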
It seems the -T models reduce these modules to:
https://github.com/yoxu515/aot-benchmark/blob/f9a62b60e12218019c0617efa1315dcc73147cdb/configs/models/default_deaot.py#L14-L15
https://github.com/yoxu515/aot-benchmark/blob/f9a62b60e12218019c0617efa1315dcc73147cdb/networks/layers/transformer.py#L138-L154
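In case it helps frame the question, the linked config lines could presumably be overridden via the repo's config-subclass pattern. A hedged sketch of what I have in mind; `MODEL_LSTT_NUM` mirrors the field in the linked `default_deaot.py`, but the head-count attributes and all values here are illustrative guesses, not the repo's exact defaults:

```python
# Hypothetical trimmed config, following the subclass-and-override pattern
# of configs/models/*.py. Names other than MODEL_LSTT_NUM are illustrative.

class DefaultModelConfig:
    def __init__(self):
        self.MODEL_LSTT_NUM = 3     # deeper variants stack more propagation blocks
        self.MODEL_SELF_HEADS = 8   # illustrative attention-head counts
        self.MODEL_ATT_HEADS = 8

class TrimmedDeAOTTConfig(DefaultModelConfig):
    def __init__(self):
        super().__init__()
        # -T style: a single propagation (LSTT) block; halving the head
        # counts would cut attention cost further, but presumably needs
        # retraining rather than plain pruning.
        self.MODEL_LSTT_NUM = 1
        self.MODEL_SELF_HEADS = 4
        self.MODEL_ATT_HEADS = 4

cfg = TrimmedDeAOTTConfig()
print(cfg.MODEL_LSTT_NUM, cfg.MODEL_SELF_HEADS)  # 1 4
```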
Which parameters would you tweak to make the modules even more efficient without losing too much performance?
Thanks a lot!