kbardool / Keras-frcnn

Keras Implementation of Faster R-CNN
Apache License 2.0

How to change the receptive fields in Faster R-CNN? #80

Open Qcatbot opened 4 years ago

Qcatbot commented 4 years ago

I am dealing with medical images of size 3000x5000 and have around 1500 of them. I can't keep the default image size of 600 as in config.py (self.im_size = 600) because training takes too long, so I decided to scale the images down to 300x500. But if I do that, the tumors shrink to about 32x32 pixels (very small)... So I want to make sure the receptive fields do not miss tumors of this size. If the receptive field is large, I think there is a high probability of missing them... Could someone please help me figure out how to change the receptive fields as needed? Is it related to the anchor box sizes in config.py (self.anchor_box_scales = [64, 128, 256])?
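
For reference, this is the scaling I mean (a quick sanity check, assuming the repo's rule of resizing the shorter image side to self.im_size):

```python
# Quick check of how the resize controlled by self.im_size scales objects
# (assumes the shorter image side is resized to im_size, as in this repo).
h, w = 3000, 5000             # original scan size
im_size = 300                 # proposed value instead of the default 600
scale = im_size / min(h, w)   # 0.1
print(round(320 * scale))     # a ~320 px tumor ends up around 32 px
```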

Arthur023 commented 4 years ago

If your tumor is 32x32 pixels, I would set anchor_box_scales to [16, 32, 64]. You also want to make sure that you take more steps, so change self.rpn_stride = 16 to something smaller.
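
In config.py that would look roughly like this (a sketch; the attribute names are from this repo's Config class, the values are my suggestion rather than the defaults, and note that rpn_stride is tied to the downsampling factor of the base network, so changing it needs care):

```python
# Suggested edits in config.py (values are a suggestion, not the defaults)
self.anchor_box_scales = [16, 32, 64]  # small scales so ~32x32 tumors match an anchor
self.rpn_stride = 8                    # denser anchor grid; must stay consistent with
                                       # the backbone's feature-map downsampling factor
```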

But to be honest, I would solve it differently: I would just cut your image into 4 with some preprocessing. That way you can feed in 1500x2500 images. Or maybe you can even cut them into 8.
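
Something like this minimal sketch (split_into_quadrants is a hypothetical helper, not part of this repo; if you tile, remember to shift your ground-truth box coordinates into tile space as well):

```python
import numpy as np

def split_into_quadrants(img):
    """Split an HxWxC image into its four corner tiles (hypothetical helper)."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return [img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]]

# a 3000x5000 scan becomes four 1500x2500 tiles
tiles = split_into_quadrants(np.zeros((3000, 5000, 3), dtype=np.uint8))
```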

Qcatbot commented 4 years ago

Hi @Arthur023, thanks for the advice. Unfortunately I am not in a position to apply your technique; I have to submit my thesis in a few days.

My RPN is working pretty well; the loss is converging nicely. However, the total loss when I train the whole network is messing up!! The loss only went down in the first few epochs; after that, no improvement. But my train_frcnn.py is still running at the moment!! Can I stop the training process, tune the parameters, and restart it? Will it work? Theoretically it might, but I am not sure.
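
What I have in mind is something like this toy sketch (not the actual Faster R-CNN graph; it assumes the checkpoint was written with save_weights(), the way train_frcnn.py saves its best weights):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# toy stand-in for the real model, just to show the stop/tune/resume pattern
x, y = np.random.rand(100, 4), np.random.rand(100, 1)
model = Sequential([Dense(8, activation='relu', input_shape=(4,)), Dense(1)])
model.compile(optimizer=Adam(lr=1e-3), loss='mse')
model.fit(x, y, epochs=2, verbose=0)
model.save_weights('checkpoint.hdf5')   # analogous to the script's best-loss save

# later: rebuild the same architecture, reload the weights, lower the learning
# rate, and resume; recompiling resets the optimizer state but not the weights
model.load_weights('checkpoint.hdf5')
model.compile(optimizer=Adam(lr=1e-4), loss='mse')
model.fit(x, y, epochs=2, verbose=0)
```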

Arthur023 commented 4 years ago

It is pretty weird that your total loss is not improving after the first few epochs. To be honest, I have never stopped a training run early. Well, I do, but then I throw the model away, so I can't answer you there. However, I assume it is possible.