Closed · lufi1 closed this issue 3 years ago
@lufi1 Reducing the size of the feature maps as early as possible is the easiest way to decrease inference latency. As we know, reducing the feature map size will hurt the detection of small objects, so our direction is to reduce the size while keeping the accuracy as high as possible. Of course, you can also try MobileNet or other lightweight backbones. The code is easy to modify and experiment with if you are familiar with MXNet.
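For illustration only, here is a minimal MXNet Gluon sketch (not the repo's actual code; the layer widths, strides, and the `make_stem` helper are hypothetical) showing the trade-off described above: making an early convolution downsample more aggressively shrinks every subsequent feature map, which cuts computation but also reduces the resolution available for small faces.

```python
# Minimal sketch, assuming an MXNet Gluon setup; not the repo's real backbone.
import mxnet as mx
from mxnet.gluon import nn

def make_stem(early_stride=2):
    """Hypothetical backbone stem; `early_stride` controls how aggressively
    the very first convolution downsamples the input image."""
    stem = nn.HybridSequential()
    stem.add(
        nn.Conv2D(channels=32, kernel_size=3, strides=early_stride, padding=1),
        nn.BatchNorm(),
        nn.Activation('relu'),
        nn.Conv2D(channels=64, kernel_size=3, strides=2, padding=1),
        nn.BatchNorm(),
        nn.Activation('relu'),
    )
    return stem

x = mx.nd.random.uniform(shape=(1, 3, 640, 640))

for s in (1, 2):
    net = make_stem(early_stride=s)
    net.initialize()
    y = net(x)
    # early_stride=1 -> 320x320 feature map; early_stride=2 -> 160x160,
    # i.e. 4x fewer activations for all later layers (faster, but coarser).
    print('early_stride=%d -> feature map shape %s' % (s, y.shape))
```

The same idea applies when swapping in a lighter backbone such as MobileNet: the earlier and more aggressively the network downsamples, the lower the latency, at the cost of small-object accuracy.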
Hi,
First of all, thank you for the project; it is super good and useful.
In the face detection repo, you say that you'll try:
Do you have any idea when this will be ready, or how I could do it myself?
Many thanks