The model peleenet_inet_acc7243.caffemodel is trained on ImageNet ILSVRC 2012 and is only used to initialize the weights of the object detection model. To run eval_voc.py, you should download the pre-trained detection model from the "07+12" or "07+12+coco" link.
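For example, mirroring the evaluation command quoted later in this thread, and assuming the downloaded "07+12" detection model is placed under models/pelee/ (the filename below is a placeholder for whatever the downloaded file is actually called):
python examples/pelee/eval_voc.py --weights=models/pelee/pelee_voc_07+12.caffemodel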
I am trying to use PeleeNet for the COCO detection task. MobileNet reported about 20 to 21% mAP for 81 classes.
@Robert-JunWang There is a "train_coco.py" file; can I use it without modification? (I started training and it seems to work.) Also, do you have COCO detection results or a trained model?
@revilokeb Can you share your retrained model and log file (the one with loss = 1.81 and detection_eval = 0.705)? That would be a good reference for my training.
Thanks all!
The "train_coco.py" file can work, but the default hyperparameters are from the original SSD and are not optimized for Pelee. The result on COCO test_dev with these default parameters is 22.2%. The detail information can be seen on the page of https://competitions.codalab.org/competitions/5181#results. I do not have the computing resource to finetune parameters on COCO. If you get the better results, please do not hesitate to let me know. Thanks.
BTW, the MobileNet+SSD result is reported at a resolution of 600x600, while Pelee is trained at 304x304.
@Robert-JunWang Thanks for the information. After I finish my training, I will report my results.
If you want, I can help you fine-tune your current model on COCO. Any prototxt, log, or trained weights would be useful to me.
Thanks for your great work!
@Robert-JunWang I see, so I understand that the actual model trained on Pascal VOC is available at the specified link. I will try this out later, many thanks! @hengck23 Yes, I could provide you with my training on Pascal VOC, but the original model gives even slightly better results, so why not take that one?
Hi @Robert-JunWang, thank you for your contribution. I tried to use the VOC / VOC+COCO models for evaluation, but it looks like you might have provided the wrong download links for those model files (the links are the same as the ImageNet-pretrained one). Could you please check and provide the correct ones? Thanks a lot!
I have updated the links. Thank you for telling me about that.
Hi @Robert-JunWang, thank you for your update! I just tried to test some images with your trained VOC model using a testing script similar to https://github.com/weiliu89/caffe/blob/ssd/examples/ssd_detect.ipynb (I replaced the model_def and model_weights in the example). However, I am getting very bad results on most images, even on the VOC training set. Could you please double-check the model or provide the script you used for testing a single image? Thank you!
The preprocessing is different from the original SSD. You should set the scale and the mean to the values used in test.prototxt and train.prototxt:
scale: 0.017
mean_value: 103.94
mean_value: 116.78
mean_value: 123.68
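In an ssd_detect.ipynb-style script, that corresponds roughly to the sketch below (a minimal illustration, not the author's test script; the prototxt/weight paths, the test image, and the 'detection_out' blob name are assumptions following the original SSD example):

import numpy as np
import caffe

caffe.set_mode_cpu()  # or caffe.set_mode_gpu()

# Placeholder paths: point these at the Pelee deploy prototxt and the downloaded VOC weights.
net = caffe.Net('models/pelee/deploy.prototxt',
                'models/pelee/pelee_voc_07+12.caffemodel',
                caffe.TEST)

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))                      # HWC -> CHW
transformer.set_channel_swap('data', (2, 1, 0))                   # RGB -> BGR (Caffe convention)
transformer.set_raw_scale('data', 255)                            # caffe.io.load_image returns [0, 1]
transformer.set_mean('data', np.array([103.94, 116.78, 123.68]))  # BGR means from the prototxt
transformer.set_input_scale('data', 0.017)                        # applied after mean subtraction

image = caffe.io.load_image('examples/images/fish-bike.jpg')      # any test image
net.blobs['data'].data[...] = transformer.preprocess('data', image)
detections = net.forward()['detection_out']                       # SSD-style output blob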
On Fri, Mar 23, 2018 at 11:44 AM, Chien-Yi Wang notifications@github.com wrote:
Hi @Robert-JunWang http:///Robert-JunWang, Thank you for your update! I just tried to test some images using your trained model on VOC by a testing script similar to https://github.com/weiliu89/ caffe/blob/ssd/examples/ssd_detect.ipynb (I replaced the model_def and model_weights in the example) However, it is getting very bad result on most of the images, even on VOC training dataset. Could you please double check the model or provide the script which you used for testing one single image? Thank you!
— You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/Robert-JunWang/Pelee/issues/1#issuecomment-375763810, or mute the thread https://github.com/notifications/unsubscribe-auth/AG20rdoEFqdutTZMbbsCo08Bdg_ZdXMzks5thUKZgaJpZM4Sl0KB .
Hi @Robert-JunWang, thanks for the reply. The problem was caused by the scale factor (0.017). Now the results look pretty good! Would you mind explaining briefly why you scale the image by 0.017? Thanks!
The classification model was trained in PyTorch with its default preprocessing. This preprocessing is similar to what is used in Caffe now (scale the image by 0.017 and subtract the mean values).
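For what it's worth (my reading, not stated explicitly above): 0.017 is roughly 1 divided by the ImageNet per-channel standard deviation used by the common torchvision normalization, rescaled to 0-255 pixel values, and the mean_value entries roughly match the torchvision means in BGR order. A quick check:

# Common torchvision ImageNet normalization: mean [0.485, 0.456, 0.406], std [0.229, 0.224, 0.225]
# on [0, 1] images. On [0, 255] images the std becomes ~57-58, so 1/std is ~0.017.
stds_255 = [s * 255 for s in (0.229, 0.224, 0.225)]    # ~[58.4, 57.1, 57.4]
print([round(1.0 / s, 4) for s in stds_255])           # -> [0.0171, 0.0175, 0.0174]
means_255 = [m * 255 for m in (0.406, 0.456, 0.485)]   # BGR order
print([round(m, 1) for m in means_255])                # -> [103.5, 116.3, 123.7]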
First of all, many thanks for providing the code, greatly appreciated!
I have run your evaluation code on your pre-trained model, i.e.
python examples/pelee/eval_voc.py --weights=models/pelee/peleenet_inet_acc7243.caffemodel
which, to my surprise, gives loss = 29.92 and detection_eval = 0.002 on the Pascal VOC validation data.
I have then retrained the model for 120,000 iterations and am obtaining loss = 1.81 and detection_eval = 0.705, fairly close to what you have published.
Would you maybe be so kind as to look once more at the pre-trained model at https://drive.google.com/file/d/1OBzEnD5VEB_q_B8YkLx-i3PMHVO-wagk/view?usp=sharing? Does that model give you good validation results?