Open · nk-dev0 opened this issue 7 years ago
Could you try it without commenting out the resize params? new_height: 512, new_width: 512
The cityscapes_weights.caffemodel has a different blob size because the BN layers have been merged. For fine-tuning you need the weights from before the BN layers were merged; I have uploaded these weights (cityscapes_weights_before_bn_merge.caffemodel).
Hi, thanks for uploading the new weights. I can successfully train with them to high accuracy, but after I go through the BN computing and BN absorbing steps as outlined in the tutorial, my prediction images are all 1s...
I trained the encoder-decoder network from scratch, followed the tutorial, and got meaningful results, so I think there's something wrong with the way I'm implementing the BN merge. Was cityscapes_weights_before_bn_merge.caffemodel trained using just the normal encoder-decoder prototxt?
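For reference, this is my understanding of what the BN-absorbing step should compute: the BatchNorm (and Scale) statistics get folded into the preceding convolution's weights and bias so the merged layer is numerically equivalent at inference time. A minimal numpy sketch (function and argument names are mine, not from the tutorial scripts):

```python
import numpy as np

def absorb_bn(conv_w, conv_b, bn_mean, bn_var, gamma, beta, eps=1e-5):
    """Fold BatchNorm + Scale statistics into the preceding conv layer.

    conv_w: (out_ch, in_ch, kh, kw) convolution weights
    conv_b: (out_ch,) convolution bias
    The original pipeline computes gamma * (conv(x) - mean) / sqrt(var + eps) + beta;
    the returned (w, b) compute the same thing in a single conv layer.
    """
    factor = gamma / np.sqrt(bn_var + eps)          # per-output-channel multiplier
    w = conv_w * factor[:, None, None, None]        # rescale each output filter
    b = (conv_b - bn_mean) * factor + beta          # fold mean/beta into the bias
    return w, b
```

If the merged model predicts a constant class, one common cause is applying this fold with the wrong per-channel axis, or folding BN stats that were never recomputed on the training data.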
Hi nkral, what is your accuracy on your 512x512 RGB dataset with 2 classes? Could you please share your weights.caffemodel and train.prototxt? I fine-tuned the ENet decoder from cityscapes_weights_before_bn_merge.caffemodel on Cityscapes data containing 19 classes and got about 80% average accuracy (not IoU). I think this accuracy is too low.
Thanks for this useful code. I'm trying to fine-tune the encoder-decoder model using cityscapes_weights.caffemodel. My own data is 512x512 RGB with 2 classes, so I suspect this size discrepancy is causing the following error (after initializing from the prototxt):
Do you have any insight into what might be causing this? I'm using all of the default settings, other than commenting out the resize params in the input blob, changing the number of classes in the deconvolution layer from 19 to 2, and adding class-frequency weighting in the softmax layer.