Hello,

After successfully training fcn8s-atonce on my own dataset with Caffe, I wanted to familiarize myself with your implementation of FCN in TensorFlow. I decided to start by training nyud-fcn32s-color on the NYUDv2 dataset (40-class task) with the heavy learning strategy (batch size: 1, unnormalized loss, lr: 1e-10, momentum: 0.99). I made some simple modifications to the code in order to:
- create NYUDv2DataHandler.py, the equivalent of nyud_layers.py in Caffe
- modify loss.py to use an unnormalized loss
- implement a training script
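For context on the second point: the "unnormalized" loss of the heavy learning scheme sums the per-pixel cross-entropy over the image instead of averaging it, which is why it is paired with such a small learning rate (1e-10). A minimal NumPy sketch of the difference (the function name and signature are mine, not from the repo):

```python
import numpy as np

def pixelwise_cross_entropy(logits, labels, normalize):
    """Cross-entropy over all pixels of one image.

    logits: (H, W, C) raw scores; labels: (H, W) integer class ids.
    With normalize=False the per-pixel losses are summed (the
    'unnormalized' loss of the heavy learning scheme); with
    normalize=True they are averaged over the pixels.
    """
    # numerically stable log-softmax over the class axis
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # negative log-probability of the true class at every pixel
    h, w = labels.shape
    nll = -log_probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return nll.mean() if normalize else nll.sum()
```

The summed loss is larger than the averaged one by a factor of H*W, so the effective step size grows with image resolution unless the learning rate is scaled down accordingly.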
Training goes well for the first iterations, but my test loss quickly starts to diverge. Most surprisingly, my test metrics (global accuracy, mean accuracy per class, mean IoU) don't collapse at all; instead they oscillate a lot.
Based on the FCN paper, I would expect the following results:
- gacc: 61.8%
- macc: 44.7%
- mIoU: 31.6%
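For reference, these three metrics follow the definitions in the FCN paper and can all be computed from one confusion matrix accumulated over the test set; a minimal sketch of how I understand them (the function name is mine):

```python
import numpy as np

def segmentation_metrics(conf):
    """Global accuracy, mean class accuracy and mean IoU from a
    confusion matrix conf, where conf[i, j] counts pixels of true
    class i predicted as class j (FCN-paper definitions)."""
    tp = np.diag(conf).astype(float)
    true_total = conf.sum(axis=1)   # pixels of each ground-truth class
    pred_total = conf.sum(axis=0)   # pixels predicted as each class
    gacc = tp.sum() / conf.sum()
    # ignore classes absent from the ground truth
    valid = true_total > 0
    macc = (tp[valid] / true_total[valid]).mean()
    iou = tp[valid] / (true_total + pred_total - tp)[valid]
    return gacc, macc, iou.mean()
```

Note that the loss and these metrics can disagree: the metrics only look at the argmax per pixel, so the loss can grow (the softmax getting less confident or very wrong on a few pixels) while the argmax, and hence the metrics, barely moves.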
Can anyone have a look at my repo and let me know if I did something wrong? This issue has been driving me crazy for quite a while now.
For reference, here is my fork of your tensorflow-fcn repo with these modifications: https://github.com/howard-mahe/tensorflow-fcn
Thanks a lot for any feedback.