Open ReneWu1117 opened 6 years ago
You also need to change lib/layer_utils/snippets.py, which has the functions generate_anchors_pre_tf and generate_anchors_pre; both call generate_anchors without the feat_stride argument. Change those calls and pass feat_stride[0] as the first argument to generate_anchors. Without this, the base size defaults to 16. Hope it works for you now.
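The edit described above can be sketched like this. The generate_anchors below is a simplified stand-in for the one in lib/layer_utils/generate_anchors.py (the real function enumerates the same ratio/scale grid), so treat the body as an assumption; the point is only the base_size argument:

```python
import numpy as np

def generate_anchors(base_size=16, ratios=(0.5, 1, 2), scales=(8, 16, 32)):
    """Simplified stand-in for lib/layer_utils/generate_anchors.py.

    The base anchor spans base_size x base_size pixels; if the call in
    snippets.py omits the argument, base_size silently stays at 16 even
    after self._feat_stride is set to [8, ].
    """
    x_ctr = y_ctr = (base_size - 1) / 2.0
    anchors = []
    for ratio in ratios:
        size = base_size * base_size
        ws = np.round(np.sqrt(size / ratio))   # width for this aspect ratio
        hs = np.round(ws * ratio)              # matching height
        for scale in scales:
            sw, sh = ws * scale, hs * scale
            anchors.append([x_ctr - (sw - 1) / 2.0, y_ctr - (sh - 1) / 2.0,
                            x_ctr + (sw - 1) / 2.0, y_ctr + (sh - 1) / 2.0])
    return np.array(anchors)

feat_stride = [8, ]
old = generate_anchors()                # base_size silently stays 16
new = generate_anchors(feat_stride[0])  # base anchor now matches the stride
```

With the explicit argument, every anchor shrinks in proportion to the new stride, which is the behaviour the comment above is asking for.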
@rnsandeep Thanks for your advice. I know what you mean: the base size of the anchors should match the feat_stride. In fact, I forgot to mention that I had already changed the base size manually, so there must be another reason.
Thanks anyway!
I have got the same problem, so in image_to_head I am using the previous layer's features instead of the layer that is normally used, which doubles the spatial size of the feature map. It works in that case, but I have yet to check the accuracy.
I have changed the feature stride to 8 and added a deconv layer after net_conv (image_to_head) to double the size of the feature map. The original feature map was intended only for a feature stride of 16; in order to use 8 you have to double the feature map size. Instead of removing the net_conv layer, you can add a deconv layer to make it work.
@rnsandeep Thanks! I use ResNet, so I changed the _build_base function in resnet_v1.py from:
net = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID', scope='pool1')
to
net = slim.max_pool2d(net, [3, 3], stride=1, padding='VALID', scope='pool1')
which also doubles the feature map size, and the code runs without errors.
But I am not sure whether it is a good solution.
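For reference, the effective feature stride is just the product of the per-stage spatial strides up to the feature map the RPN uses. A quick sanity check of the pool1 change (the stride values below follow the standard ResNet-v1 layout up to the conv4 block that image_to_head returns, so treat them as assumptions):

```python
import math

def effective_stride(strides):
    # The effective stride of a feature map is the product of all
    # spatial strides applied before it.
    return math.prod(strides)

original = [2, 2, 1, 2, 2]  # conv1, pool1, block1, block2, block3
modified = [2, 1, 1, 2, 2]  # pool1 stride changed from 2 to 1

print(effective_stride(original))  # 16
print(effective_stride(modified))  # 8
```

So changing pool1 from stride 2 to stride 1 does give an effective stride of 8, matching the feat_stride change, though it makes every subsequent layer run on a 2x larger map (hence the cost/accuracy question).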
@ReneWu1117 I would suggest adding this after net_conv instead of doing it in the _build_base function, which contains the initial layers of the network. I added slim.conv2d_transpose(net_conv, 1024, [4, 4], [2, 2], scope='deconv') after net_conv and it didn't reduce the accuracy.
@rnsandeep Hi, can you show me how to add the deconv layer after net_conv? Thanks so much.
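Based on the earlier comment, the edit presumably looks like the commented lines below (the exact placement in network.py is an assumption, not confirmed by the author). The executable part only checks the shape arithmetic for why a [4, 4] kernel with stride [2, 2] doubles the map:

```python
# Presumed edit after image_to_head in tf-faster-rcnn (assumption):
#   net_conv = self._image_to_head(is_training)
#   net_conv = slim.conv2d_transpose(net_conv, 1024, [4, 4], [2, 2],
#                                    scope='deconv')

def conv2d_transpose_out_size(in_size, stride=2, kernel=4, padding="SAME"):
    # TF transposed-conv output sizes:
    #   'SAME'  -> input * stride
    #   'VALID' -> (input - 1) * stride + kernel
    if padding == "SAME":
        return in_size * stride
    return (in_size - 1) * stride + kernel

# A 38x50 conv4 map (600x800 input at stride 16) becomes 76x100,
# i.e. an effective stride of 8.
print(conv2d_transpose_out_size(38))  # 76
print(conv2d_transpose_out_size(50))  # 100
```

With slim's default 'SAME' padding, any kernel size works for the doubling; the 4x4 kernel is a common choice to avoid checkerboard artifacts with stride 2.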
Hey @rnsandeep, I'm trying to run the code on mammogram images, which require high sensitivity to small objects, so I want the stride to be 2 if possible. Does that mean I should add more layers to upsample the output feature map by 8x?
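For the arithmetic: going from stride 16 to stride 2 is a 16/2 = 8x upsample, so it would take three stride-2 deconv layers rather than the single one discussed above. A quick sanity check (plain arithmetic, not repo code):

```python
import math

def stride2_deconvs_needed(current_stride, target_stride):
    # Each stride-2 transposed conv halves the effective feature stride,
    # so the count is log2 of the upsampling factor.
    factor = current_stride / target_stride
    n = math.log2(factor)
    assert n.is_integer(), "strides must differ by a power of two"
    return int(n)

print(stride2_deconvs_needed(16, 8))  # 1 deconv, as in this thread
print(stride2_deconvs_needed(16, 2))  # 3 deconvs for stride 2
```

Note that stride 2 makes the feature map 64x larger in area than stride 16, which is likely to be very expensive in memory and anchors.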
I want to change feat_stride from 16 to 8 so that the model can be more sensitive to small objects. I changed:
self._feat_stride = [16, ]
to
self._feat_stride = [8, ]
But that's not enough; I get an error. I found that the old version had
__C.DEDUP_BOXES
, but it seems to have been abandoned now. So how can I change the feat_stride now? Any suggestions would be helpful.
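To illustrate what feat_stride controls: anchor centres are laid out every feat_stride pixels over the image, so halving it quadruples the number of anchor positions. A simplified sketch of the shift grid built in generate_anchors_pre (the real function then adds these shifts to the base anchors; details here are assumptions):

```python
import numpy as np

def anchor_shifts(height, width, feat_stride):
    # One (x1, y1, x2, y2) shift per feature-map cell, spaced
    # feat_stride pixels apart in image coordinates.
    shift_x = np.arange(0, width) * feat_stride
    shift_y = np.arange(0, height) * feat_stride
    sx, sy = np.meshgrid(shift_x, shift_y)
    return np.stack([sx.ravel(), sy.ravel(), sx.ravel(), sy.ravel()], axis=1)

# A 600x800 image: the feature map is 38x50 at stride 16 but 75x100 at
# stride 8, so the anchor grid grows roughly 4x.
coarse = anchor_shifts(38, 50, 16)
fine = anchor_shifts(75, 100, 8)
print(coarse.shape[0], fine.shape[0])  # 1900 vs 7500 positions
```

This is why the stride change must be made consistently everywhere (the network, the anchor generation in snippets.py, and the feature map size), not just in self._feat_stride.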