I'm trying to finetune the model on my own data from the provided pretrained checkpoint model.ckpt-73018, but I'm running into a strange issue. At first (until roughly step 500) the model detects only large shapes such as windows and doors (and the occasional large text box), and it also keeps predicting the whole image as a single box. After that it stops producing any predictions at all; lowering the pixel confidence threshold to 0.6 reveals the same kind of predictions, just with lower confidences. My images are 600x600. I also tried the ICDAR2015 data, converted with icdar2015_to_tfrecords.py, and hit the same problem. I've tried a learning rate of 1e-4 as well as the recommended 1e-2. When I check the training images in TensorBoard, they look fine and the bounding boxes are drawn correctly. The loss curve also looks plausible: it decreases until about step 500 (down to roughly 1.4) and then plateaus.
I'm training with batch size 8 on a single GPU. I'm running the code on Python 3.5, so I had to make some small changes for that. I'm getting predictions with test_pixel_link.py, and it produces good predictions with the provided pretrained model (on both my own data and the ICDAR2015 data).
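To make the thresholding behaviour concrete, here is a minimal NumPy-only sketch of what I mean by "lowering the threshold reveals the same predictions with lower confidences". The score map values below are made up for illustration; they are not from the actual model output, just the pattern I observe: every pixel score ends up below the default cutoff, so lowering the threshold re-admits the same weak region rather than new detections.

```python
import numpy as np

def pixels_above(score_map, threshold):
    """Return the boolean mask of pixels kept at a given confidence threshold."""
    return score_map >= threshold

# A toy 4x4 "pixel confidence" map: one weak blob, everything under 0.8.
score_map = np.array([
    [0.65, 0.70, 0.10, 0.05],
    [0.62, 0.68, 0.12, 0.07],
    [0.05, 0.08, 0.03, 0.02],
    [0.04, 0.06, 0.02, 0.01],
])

# At the default-style threshold nothing survives, which looks like
# "no predictions"; at 0.6 the same weak blob comes back.
print(pixels_above(score_map, 0.8).sum())  # 0 pixels kept
print(pixels_above(score_map, 0.6).sum())  # 4 pixels kept (same region)
```

So after step 500 it's not that the model predicts different boxes at a lower threshold; it predicts the same ones, just with confidences that have collapsed below the cutoff.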
Any ideas what might be going on?