Closed henniekim closed 3 years ago
Hello !
The backbone used in this paper is heavy, which leads to slower inference. Have you ever tried a lighter backbone such as Mobile Deeplab?
I'd like to figure out how much performance degrades when using a lighter backbone network.
Thanks a lot :)
Hi and thanks for the interest!
I haven't tried the Mobile Deeplab (it wasn't released in torchvision at the time of development), but it's definitely possible. I would guess a drop of somewhere between 2-5% AP.