isht7 / pytorch-deeplab-resnet

DeepLab resnet v2 model in pytorch
MIT License

Training produces model generating blank segmentations #32

Closed · aeonstasis closed this issue 6 years ago

aeonstasis commented 6 years ago

Hi, thanks for the work implementing the model and training script.

I'm attempting to train on optical flow RGB image data with binary segmentation masks (where 0 = background and 1 = foreground). However, no matter my choice of hyperparameters or number of iterations (20k/40k/80k), the loss steadily decreases, yet the resulting model predicts all background pixels for input images at test time.

I've confirmed that the pretrained model segments correctly, so there's something wrong in the training process. The weights are non-zero but argmax always seems to choose class 0. Do you have any idea what might be wrong?

I'm using a GeForce GTX 1080 with Python 2.7, and I'm not encountering any memory or other errors during training.

isht7 commented 6 years ago

Have you checked what the output looks like when you test your trained network on the training images themselves? If that output is also entirely class 0, then something is wrong with your training procedure. Did you set NoLabels to 2? Also check that you are viewing the output image correctly, because a value of 1.0 or 0.0 out of 255.0 in an image will look black.
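For example, something along these lines makes a {0, 1} prediction visible (a rough sketch, assuming the network output is a 2-class score tensor named output; the tensor name and file path are placeholders):

```python
# Sketch (not from the repo): scale a {0, 1} prediction map so class 1 shows as white.
# Assumes `output` is the network's [1, 2, H, W] score tensor for a 2-class problem.
import numpy as np
from PIL import Image

scores = output.data.cpu().numpy()[0]              # [2, H, W]
pred = np.argmax(scores, axis=0).astype(np.uint8)  # per-pixel labels in {0, 1}
Image.fromarray(pred * 255).save('pred_vis.png')   # 1 -> 255, so foreground is visible
```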

aeonstasis commented 6 years ago

The output on the training images is similarly blank. I've set NoLabels=2, and I've confirmed the values by stepping through in IPython, generating the output image as in evalpyt2.py and running np.unique() and np.where() to see that the values are all zero. This is pretty bizarre and I've been trying to figure it out. My segmentation masks are [0, 255], and I made sure to convert the 255 values to 1 for foreground labels, as your code expects.
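For reference, the mask conversion I'm doing looks roughly like this (a sketch; the mask path is a placeholder):

```python
# Sketch of converting a {0, 255} binary mask to {0, 1} class labels.
import numpy as np
from PIL import Image

mask = np.array(Image.open('mask.png'))  # values in {0, 255}
label = (mask > 0).astype(np.uint8)      # 255 -> 1, 0 stays 0
print(np.unique(label))                  # expect [0 1] when both classes are present
```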

I also requested the augmented VOC dataset from you earlier and tried training on that, running the exact command from the repo (train.py --lr 0.00025 --wtDecay 0.0005 --maxIter 20000 --GTpath <train gt images path here> --IMpath <train images path here> --LISTpath data/list/train_aug.txt) for 20k iterations, to rule out an issue with my data, and I see the same output. I've attached example images comparing your pretrained model against the model from my training (mine above, pretrained below):

[attached images: output from my trained model (above) vs. the pretrained model (below)]

isht7 commented 6 years ago

What IOU do you get when you use evalpyt2.py to evaluate your model trained on VOC?
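(By IOU I mean per-class intersection-over-union averaged over the classes; a minimal sketch of that computation, not the repo's exact evalpyt2.py code:)

```python
# Minimal sketch of per-class IoU and mean IoU over flat integer label arrays.
# `gt` and `pred` have the same length; `n_classes` corresponds to NoLabels.
import numpy as np

def mean_iou(gt, pred, n_classes):
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(gt == c, pred == c).sum()
        union = np.logical_or(gt == c, pred == c).sum()
        if union > 0:                       # skip classes absent from both gt and pred
            ious.append(float(inter) / union)
    return np.mean(ious)
```

An all-background prediction gives zero IoU on every foreground class, so the mean collapses toward zero.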

aeonstasis commented 6 years ago

I'm getting a mean IOU of 0.034486, and I'm using the vanilla repository, apart from removing unused imports from the top. The machine and cluster work fine for other training runs.

isht7 commented 6 years ago

There must be some error in what you are doing. Are you sure that you have not modified any part of the train script? Just after you reported the low accuracy, I re-ran training of my model to check its performance. I followed these steps exactly, without any modification -

I have never used this or any related repository before on the computer on which I am currently training the model.

The model is still training, but I evaluated the first saved snapshot using evalpyt2.py and got an mIOU of 0.594 (59.4%). This value is as expected and should improve to around 72.40%, as reported in the readme. Please replicate my steps; I believe you should get these results as well.

aeonstasis commented 6 years ago

I did remove unused imports and I corrected the integer division / casting per another issue, but it's definitely possible I modified something I forgot about. I greatly appreciate the time you've spent double-checking.
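The kind of casting fix I mean is along these lines (a hypothetical illustration of the Python 2 integer-division pitfall; the exact expression in train.py may differ):

```python
# Hypothetical illustration: in Python 2, dividing two ints truncates the result,
# which can silently produce a wrong downstream size.
from math import ceil

size = 321
print(size / 8)               # Python 2 truncates this to 40
print(int(ceil(size / 8.0)))  # dividing by 8.0 forces float division, giving 41
```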

I needed to reduce the scale upper bound and side length in train.py to make the data fit in the GTX 1080's memory, but I got an IOU of 0.6477 at iteration 5000, which lines up with what you're saying. I'm going to check the training for my own dataset next.
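The memory-reducing change was along these lines (the variable names and numbers here are assumptions for illustration, not the repo's exact code):

```python
# Hypothetical illustration of shrinking the random-scale upper bound and the crop
# side length; names and values are assumptions, not the repo's actual train.py code.
import random

scale = random.uniform(0.5, 1.1)   # e.g. lower the upper bound of the scale range
dim = int(scale * 281)             # e.g. use a smaller base side length
```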