ravikantb opened this issue 8 years ago
Where did you change it? I am doing the same thing: I set the first conv layer's stride from 2 to 1, but the final testing mAP got very bad, dropping from 58 to 10. I think something is wrong somewhere...
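For anyone else trying this: the total feature stride is just the product of the per-layer strides, so halving the first conv stride halves the total stride everywhere downstream, and the 16 hard-coded in the proposal/anchor layers no longer matches the network. A minimal sketch (the `total_stride` helper and the layer strides are illustrative, ZF-like values, not read from any prototxt; check your own):

```python
# Sketch: total feature stride = product of per-layer strides.
from functools import reduce

def total_stride(strides):
    return reduce(lambda a, b: a * b, strides, 1)

# Illustrative ZF-like strides: conv1, pool1, conv2, pool2.
print(total_stride([2, 2, 2, 2]))  # 16 -- matches the default feat_stride
print(total_stride([1, 2, 2, 2]))  # 8  -- conv1 stride changed from 2 to 1;
                                   # the stride used by the proposal/anchor
                                   # layers must be updated to match, or
                                   # anchors and feature-map cells disagree.
```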
The paper mentions that:
On the re-scaled images, the total stride for both ZF and VGG nets on the last convolutional layer is 16 pixels, and thus is ∼10 pixels on a typical PASCAL image before resizing (∼500×375). Even such a large stride provides good results, though accuracy may be further improved with a smaller stride.
But the only place where I could find the 16 stride was in the RoI proposal layer named 'proposal'. I tried to play with it and didn't get good results. Do let me know what kind of results you get if you change it :)
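For context, the stride in that layer is what maps each feature-map cell back to image coordinates when placing anchors. A minimal sketch of that kind of shifting (simplified; the `anchor_centers` helper and its arguments are illustrative, not the repo's exact code):

```python
import numpy as np

def anchor_centers(feat_height, feat_width, feat_stride):
    """Image-space (x, y) offsets for one anchor set per feature-map cell."""
    shift_x = np.arange(feat_width) * feat_stride   # column -> image x
    shift_y = np.arange(feat_height) * feat_stride  # row    -> image y
    xs, ys = np.meshgrid(shift_x, shift_y)
    return np.stack([xs.ravel(), ys.ravel()], axis=1)

# A 600x1000 input with total stride 16 gives a ~38x63 feature map,
# so anchors are dropped every 16 pixels across the whole image.
print(anchor_centers(38, 63, 16)[-1])  # -> [992 592], near the bottom-right
```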
Hi @ravikantb @fateleak , did you get any logical way to choose strides for a particular network?
@janismdhanbad: Sorry, but I change this 'stride' parameter based on the dataset I am working with. It's mostly intuition, built up from playing with more and more datasets, failing, and trying again and again :) If your objects are small (not covering much of the image) and sparse (spread across the image, not closely packed), then I suggest trying longer strides; that will speed up the processing as well (see the sketch below). Good luck, and let us know what approach works for your work.
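One way to make that intuition concrete: the stride sets how far apart anchor centers are, which bounds how many anchors can land on a single object. A rough back-of-the-envelope sketch (the helper and the numbers are purely illustrative):

```python
# Rule of thumb: an object of size w x h is overlapped by roughly
# (w / stride) * (h / stride) anchor centers. A small stride gives a dense
# anchor grid (helps small, tightly packed objects); a large stride gives
# fewer anchors to score (faster, and fine for large or sparse objects).
def anchor_centers_per_object(obj_w, obj_h, stride):
    return max(1, obj_w // stride) * max(1, obj_h // stride)

for stride in (8, 16, 32):
    print(stride, anchor_centers_per_object(40, 40, stride))
# stride  8 -> 25 centers inside a 40x40 object
# stride 16 ->  4
# stride 32 ->  1
```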
Hi,
As per the paper, 'accuracy may be further improved with a smaller stride'. I changed the default 16-pixel stride during anchor generation to 5 pixels, but it decreased the overall object detection accuracy. Does anyone have any idea what might cause this? I have trained my model on a single object class, and my test images also contain many instances of that object only, albeit at different sizes.
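One likely culprit (an assumption about the setup, not something verified here): if only the stride used to shift the anchors was changed while the network still downsamples by 16, the anchor grid no longer covers the image. A quick sketch (all variable names and sizes are illustrative):

```python
img_w, img_h = 1000, 600
net_stride = 16  # actual downsampling of the conv layers (unchanged)
feat_w, feat_h = img_w // net_stride, img_h // net_stride  # 62 x 37 cells

for anchor_stride in (16, 5):
    # anchors are shifted by anchor_stride per feature-map cell
    max_x = (feat_w - 1) * anchor_stride
    max_y = (feat_h - 1) * anchor_stride
    print(anchor_stride, (max_x, max_y))
# 16 -> (976, 576): anchor centers span the whole image
#  5 -> (305, 180): anchors cover only the top-left corner, so most
#       objects never get a positive anchor during training
```

In other words, a smaller stride only helps if the network's actual feature-map resolution is increased to match; shrinking the anchor-generation stride alone just misplaces the anchors.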