Closed by @1451595897 5 years ago
@1451595897 What is the end goal of using mobilenet_v2 ?
Yolov3 outputs vectors at 3 points in the detection process, while mobilenet produces a single output vector after its conv layers. The architectures can't simply be swapped for one another.
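To illustrate the point above, here is a minimal sketch of the output shapes of YOLOv3's three detection heads, assuming a 416x416 input and the standard COCO head of 3 anchors per scale over 80 classes (these values are assumptions for illustration, not read from any particular cfg):

```python
# Sketch: shapes of YOLOv3's three detection-head outputs for a 416x416 input.
# grid = input_size / stride, channels = anchors * (4 box + 1 objectness + classes).
NUM_ANCHORS = 3    # anchors per scale (assumed, standard COCO config)
NUM_CLASSES = 80   # COCO

def yolo_head_shapes(input_size=416):
    shapes = []
    for stride in (32, 16, 8):  # coarse -> fine detection scales
        grid = input_size // stride
        channels = NUM_ANCHORS * (5 + NUM_CLASSES)
        shapes.append((channels, grid, grid))
    return shapes

print(yolo_head_shapes())
# -> [(255, 13, 13), (255, 26, 26), (255, 52, 52)]
```

A classification backbone like mobilenet_v2, by contrast, ends in a single pooled feature vector, so its final output alone cannot feed these three multi-scale heads.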
If your goal is a faster/smaller model, take a look at tiny-yolo!
@1451595897 as @Ownmarc said, I would not recommend using mobilenet or mobilenet_v2 as a backbone. I've used SSD mobilenet before and was not impressed with the results; YOLOv3 should outperform it in most situations.
If you are looking for speed and accuracy you can run YOLOv3-SPP at 320 resolution, which is still rated at 0.52 mAP on COCO. This is what we use in the iDetection iOS App.
If you are prioritizing speed above all else, then YOLOv3-tiny is about 10 times smaller than full YOLOv3. This will run at up to 300 FPS on a V100.
Hello, I would like to ask: if I want to replace darknet53 with mobilenet or mobilenet_v2 in your project, besides modifying the cfg file and importing a new pre-trained model, what else do I need to modify? Looking forward to your suggestions and responses!
I also want to change the backbone to mobilenet. Have you been successful with this? If so, could you share your code? Thank you.
Hi, where can I find the code for replacing the darknet53 backbone with mobilenet?
@AravindS1306 the backbone replacement from darknet53 to mobilenet isn't directly supported in the YOLOv3 repository. However, there are various implementations available online that demonstrate how to integrate mobilenet as a backbone. You can refer to those implementations or seek guidance from the YOLO community for specific assistance. Good luck with your implementation!
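On the cfg side, the usual requirements are to keep three feature taps at strides 8/16/32 feeding the [yolo] layers, and to set filters on the conv layer before each [yolo] layer to anchors x (5 + classes), e.g. 3 x (5 + 80) = 255 for COCO. As a hedged sketch only (not code from this repository), a mobilenet-style depthwise-separable block could be written in cfg syntax like this, assuming your parser supports the groups key (the ultralytics parser and AlexeyAB's darknet fork do; channel counts here are made up for illustration):

```
# depthwise 3x3 conv: groups equals the input channel count (32 here, assumed)
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
groups=32
activation=leaky

# pointwise 1x1 projection to 64 channels (assumed)
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky
```

Beyond the cfg, you would also need pre-trained weights whose layer order matches the new cfg, since the darknet53 weight files cannot be loaded into a mobilenet layout.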