Open chunniunai220ml opened 5 years ago
Good question! 1) You can follow data/voc0712.py to customize your dataloader. 2) It's OK to finetune from the published model, but you have to fully align the model size before loading the weights; you will need to modify some code in utils/core.py.
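A rough sketch of the weight-alignment idea (the helper name and checkpoint layout here are assumptions, not the repo's exact utils/core.py code): load the published checkpoint and copy only the tensors whose shapes still match after changing num_classes.

```python
import torch

# Sketch only: filter a pretrained checkpoint so that layers whose shapes changed
# (the class-prediction heads, e.g. 81 -> 7 classes) are simply skipped.
def load_matching_weights(model, checkpoint_path):
    checkpoint = torch.load(checkpoint_path, map_location='cpu')
    state_dict = checkpoint.get('state_dict', checkpoint)  # some checkpoints wrap the weights
    model_dict = model.state_dict()

    # keep only tensors that exist in the new model with identical shapes
    matched = {k: v for k, v in state_dict.items()
               if k in model_dict and v.shape == model_dict[k].shape}
    model_dict.update(matched)
    model.load_state_dict(model_dict)
    print('loaded %d / %d tensors' % (len(matched), len(model_dict)))
    return model
```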
Hi, thanks for sharing your code. I've finetuned your published model on my own dataset. I set the dataset up in VOC2007 format and set num_class = 7 instead of 81, and I think the relevant parameters are set properly, but the result is very bad! I tested on your images and on my own images, and there are many bounding boxes (more than 1K). I'm confused. Could you tell me what the problem is?
I met the same problem, very confusing.
Facing exact same issue.
What do your losses look like during training? Did you check the results of test.py on some test data that you have?
In my experiment, the console prints the loss by default. The training settings are the same as the project's (I didn't change them). loss_l is about 3.237 after training and loss_c drops from 11 to about 3. The result (running test.py) on my test data is also very bad (so many bounding boxes)!
Well, it seems that your training still has not converged, as those values are pretty high. In my case I have loss_l = 1.28 and loss_c = 0.4, but still lots of FP bounding boxes.
Hi, sorry for the late reply. @Roujack, your loss values with only 7 categories are not right. For example, the stable VOC losses (with 20 categories) are about loss_l: 1.2-1.4 and loss_c: 0.3-0.5. @dshahrokhian may have a more stable training process.
As for why you have so many FP bboxes, I suggest you: 1) Check the GT labels: they should be 1 to k, not 0 to k-1, because there is a BG class. 2) Visualize the training images. If they look the same as the val images, check the pre-processing or post-processing; if they look much better than the val images, check whether training is overfitting. Because I don't know how large your dataset is, the training process needs to be tuned accordingly.
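A quick sanity check along those lines (a sketch under assumptions: it presumes the dataloader returns per-image targets as N x 5 arrays of [xmin, ymin, xmax, ymax, label], which may differ from your customized voc0712.py):

```python
import numpy as np

# Sketch: verify that foreground labels run from 1 to k, leaving 0 for background.
def check_labels(dataset, num_foreground_classes):
    bad = 0
    for idx in range(len(dataset)):
        _, target = dataset[idx]                      # assumed: (image, N x 5 array)
        labels = np.asarray(target)[:, 4].astype(int)
        if labels.min() < 1 or labels.max() > num_foreground_classes:
            bad += 1
            print('image %d has out-of-range labels: %s' % (idx, labels))
    print('%d images with out-of-range labels' % bad)
```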
@qijiezhao, thanks for sharing your great object detection project. After 12 epochs on COCO, I get the results below. About how many epochs did you train your weights for?
Refer to this: https://github.com/qijiezhao/M2Det/blob/master/configs/m2det512_vgg.py#L27
@qijiezhao The problem is still unresolved. Here are some screenshots. Number-of-classes setting:
Loss after 10 epochs (the dataset has 6464 images), which seems normal now:
Test on a training image (so bad!):
@Roujack Are you using pytorch==0.4.1 for both training and testing? That partially solved the problem for me; I'm still trying to figure out how to improve results further.
@dshahrokhian The pytorch version is 0.4.1.
I see that you have a lot of bounding boxes with low confidences. Did you try increasing the threshold in demo.py, function draw_detection? Something like 0.9.
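For reference, the thresholding being suggested amounts to something like this (a sketch; it assumes detections are rows of (xmin, ymin, xmax, ymax, score, class_id), which may not match demo.py's exact data structures):

```python
# Sketch: drop low-confidence boxes before drawing them.
def filter_detections(detections, score_thr=0.9):
    return [det for det in detections if det[4] >= score_thr]

# e.g. kept = filter_detections(all_dets, score_thr=0.9)
```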
@dshahrokhian Yeah. After I increase the score threshold to 0.9 in demo.py's draw_detection and set num_per_class = 5 in configs/m2det512_vgg.py, the number of bounding boxes decreases. But the bounding boxes do not locate the objects well and the classification labels are wrong: ![6811553607441 pic](https://user-images.githubusercontent.com/20369575/55001607-efd5ab80-500f-11e9-88ec-e80607787b36.jpg) I trained on my customized dataset for 10 epochs and got loss_l = 1.2 and loss_c = 0.3. What do you think I can do to improve the detector's behaviour?
I met the same issue.
Hi, when I finetuned I got loss_l = 0.6 and loss_c = 1.2... loss_c is large. I have changed the class-related code as in your screenshots; is there anywhere else that should be changed? Thanks @Roujack @qijiezhao
Hi @dshahrokhian, I got loss_l = 0.63 and loss_c = 0.8. On VOC0712 I can get a good result, mAP reaches 82%, but when I train on my own dataset (14000 samples), the test result is very bad... Do you think it's overfitting? Any suggestions?
I trained on VOC0712 for 160 epochs but the mAP is only 65%. I only changed the batch size, num_class and VOC_CLASSES. Could you tell me exactly how you trained to 82%?
Did you use the COCO pretrained weights?
Any progress here?
@qijiezhao I see this:
It's OK to finetune from the published model, but you have to completely align the model size before loading the weight, you have to modify some code in utils/core.py.
How do I modify the size, and which code in utils/core.py has to be changed? Could you please help me? Thanks very much.
Hello, I can only reach 72.5% mAP on the Pascal VOC2007 dataset. How did you tune it?
I have the same issue of so many bounding boxes :(
Has anybody solved the problem? My loss_l drops but loss_c doesn't.
@zhulei1228 I remember somebody mentioned weighting loss_l and loss_c. Maybe you should give it a try?
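If you try it, the change is just a weighted sum in the training step, roughly as sketched below (the 2.0 on loss_c is only an example, and the criterion call follows the usual MultiBoxLoss-style interface rather than necessarily the repo's exact signature):

```python
loc_weight, conf_weight = 1.0, 2.0   # example weights, tune for your data

optimizer.zero_grad()
out = net(images)
loss_l, loss_c = criterion(out, priors, targets)
loss = loc_weight * loss_l + conf_weight * loss_c
loss.backward()
optimizer.step()
```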
Thanks for your sharing! I want to know how to prepare the custom train/val data. Is the format img_path x_min,y_min,x_max,y_max,cls?
Besides, how can I finetune your published model on a custom dataset, or would I be better off training from scratch?
Did you modify the model for custom data? Does it work fine? Thank you!