Xiangyu-CAS opened this issue 6 years ago
@Xiangyu-CAS would you mind sharing the results of your experiment?
@Ai-is-light, here are my results (mAP on the 2644-image COCO subset):

- ResNet50 (conv4,5 dilation, bn_istrain) + 2 stages: 55%
- ResNet50 (conv4,5 dilation, bn_istrain) + 6 stages: 55%
- ResNet50 (conv4,5 dilation, bn_freeze) + N stages: failed, <10%
- DenseNet121 (conv4,5 dilation, bn_freeze) + 2 stages: 54%
- DenseNet121 (conv4,5 dilation, bn_freeze) + 6 stages: 54%
- DenseNet121 (conv4,5 dilation, bn_istrain) + 6 stages: 54%
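For reference, a minimal PyTorch sketch of the conv4/conv5 dilation and bn_freeze settings (assuming torchvision's `resnet50`; this illustrates the settings, not the exact training code):

```python
import torch.nn as nn
from torchvision.models import resnet50

# conv4/conv5 dilation: layer3 and layer4 keep stride 1 and use dilated
# convs instead, so the backbone outputs stride-8 feature maps.
backbone = resnet50(pretrained=True,
                    replace_stride_with_dilation=[False, True, True])

def freeze_bn(model):
    # bn_freeze: lock the running statistics and affine parameters of all
    # BatchNorm layers. Re-apply after every model.train() call, since
    # train() flips BN back to updating its running stats.
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eval()
            for p in m.parameters():
                p.requires_grad = False

# bn_istrain is just the default training mode; for the bn_freeze variant:
freeze_bn(backbone)
```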
Would you mind sharing yours?
@Xiangyu-CAS I haven't tried ResNet or DenseNet yet, but I will. I have tried the original network modified with dilated convolutions; in my work I need to pay more attention to speed. However, I failed when I tried to use BN layers in stages 2-6. Your result "ResNet50 (conv4,5 dilation, bn_istrain) + 2 stages: 55% (2644 images)" suggests you did use BN in training for every layer of stages 2-6, correct? I trained the original network and got about 0.584 mAP on the 2644-image COCO subset, and about 0.543 mAP with dilated convs. Now I am trying MobileNet to train a faster model.
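To make the question concrete, this is a minimal sketch of what I mean by a BN layer inside a refinement stage (a hypothetical PyTorch layout; the `stage_block` helper and the 7x7 kernel size are illustrative assumptions, not the exact network):

```python
import torch.nn as nn

def stage_block(in_ch, out_ch, use_bn=True):
    # One conv unit of a refinement stage (stages 2-6 use large 7x7
    # convs in the OpenPose-style architecture). use_bn=True inserts
    # BatchNorm after the conv, the variant that failed in my runs.
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)]
    if use_bn:
        layers.append(nn.BatchNorm2d(out_ch))
    layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)
```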
Is the performance you reported obtained on COCO2017 or COCO2014?
mAP on the COCO dataset.
Hi, did you use MobileNet to train the model? I have been trying it for a long time. Did you change the crop size to 128 for training? How was the performance? Thanks.
Hi, recently I have carried out some experiments with a DenseNet backbone. To produce feature maps at stride 8, the dilation (à trous) trick was used.
However, the results are really confusing: most of the refinement stages are not working. In the original implementation the loss decreases from stage 1 to stage 6, but in my experiment with dilation the loss stays constant from stage 3 to stage 6.
Did you carry out any experiments using dilation and run into the same issue?
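For context, this is roughly how the à trous trick can be applied to DenseNet121 to get stride-8 feature maps (a sketch assuming torchvision's `densenet121`; replacing the transition pooling with `nn.Identity` skips the 2x2 averaging, a common simplification):

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

def dilate_3x3_convs(block, dilation):
    # Turn every 3x3 conv in a dense block into an a trous conv;
    # padding equal to the dilation preserves the spatial size.
    for m in block.modules():
        if isinstance(m, nn.Conv2d) and m.kernel_size == (3, 3):
            m.dilation = (dilation, dilation)
            m.padding = (dilation, dilation)

backbone = densenet121(pretrained=True).features

# Drop the two stride-2 poolings after stride 8.
backbone.transition2.pool = nn.Identity()
backbone.transition3.pool = nn.Identity()

# Compensate the removed strides with growing dilation rates so the
# receptive field stays roughly unchanged.
dilate_3x3_convs(backbone.denseblock3, dilation=2)
dilate_3x3_convs(backbone.denseblock4, dilation=4)

x = torch.randn(1, 3, 256, 256)
print(backbone(x).shape)  # torch.Size([1, 1024, 32, 32]) -> stride 8
```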
Original: [figure: per-stage training loss curves]
DenseNet + dilation: [figure: per-stage training loss curves]