Closed martinkersner closed 7 years ago
@martinkersner Thanks for your question and also your scripts to train CRF-RNN. :)
You might find Andrea Vedaldi's FCN training script useful! https://github.com/vlfeat/matconvnet-fcn
@bittnt If there are only two classes, i.e. background and person, how do we discount the background in your code? Do we just set 255 as the label, or is something else needed? And during back-propagation, how do we ignore the background label when there are only two classes?
Set ignore_label: 255 in your loss layer.
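In the old-style Caffe prototxt used here, that would look something like the sketch below (the blob names "upscore" and "label" are taken from the training prototxt later in this thread; adjust them to match your own network):

```
layers {
  name: "loss"
  type: SOFTMAX_LOSS
  bottom: "upscore"
  bottom: "label"
  top: "loss"
  loss_param {
    ignore_label: 255   # pixels labelled 255 contribute no loss and no gradient
    normalize: false
  }
  include: { phase: TRAIN }
}
```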
@bittnt If the background class is ignored, only one class is left. How does CRF-as-RNN perform when labeling only one class?
For person, check this new paper: https://hal.inria.fr/hal-01255765/document
@bittnt Thanks. Does CRF-as-RNN work for segmenting only one class (besides background)?
The publicly available CRF-RNN model and code work with 20 object classes (PASCAL VOC). In our ICCV paper, we also show that it works with 59 object classes (PASCAL Context). Check our demo: http://www.robots.ox.ac.uk/~szheng/crfasrnndemo
Hello @bittnt
I am training your network with data generated by the matconvnet-fcn script, as you suggested to @martinkersner, and I am still getting rather poor results.
Training FCN-8 alone with matconvnet-fcn gives better results.
I am trying to segment surgical tools from the background, and so far I get a lot of blobs spread across the image. Some of them are correct, but even so, I get better results with FCN-8, which gives me a well-defined blob where the actual tool is located, with rare false detections. The reason I am trying to use your network is that I want nice borders, which FCN-8 doesn't give me.
I think I am missing something. I am using this repository to train net https://github.com/martinkersner/train-CRF-RNN
and matconvnet fcn script to generate examples.
Could you please offer any advice or recommendations?
Thank you.
@warmspringwinds Once you have a pre-trained FCN-8 model, you should plug the CRF layer into your FCN-8-train-test.prototxt. For example, on PASCAL VOC a CRF-RNN-train-test.prototxt would look like this:
```
....
layers { type: CROP name: 'crop' bottom: 'bigscore' bottom: 'data' top: 'coarse' }
layers { type: SPLIT name: 'splitting' bottom: 'coarse' top: 'unary' top: 'Q0' }
layers {
  name: "inference1"
  type: MULTI_STAGE_MEANFIELD
  bottom: "unary"
  bottom: "Q0"
  bottom: "data"
  top: "upscore"
  blobs_lr: 10000
  blobs_lr: 10000  # learning rate for w_B
  blobs_lr: 1000   # learning rate for compatibility transform
  multi_stage_meanfield_param {
    num_iterations: 5
    compatibility_mode: POTTS
    threshold: 2
    theta_alpha: 50
    theta_beta: 3
    theta_gamma: 3
    spatial_filter_weight: 3
    bilateral_filter_weight: 5
  }
}
layers {
  name: "loss"
  type: SOFTMAX_LOSS
  bottom: "upscore"
  bottom: "label"
  top: "loss"
  loss_param { ignore_label: 255 normalize: false }
  include: { phase: TRAIN }
}
....  # add your favourite segmentation accuracy layer.
```
After training end-to-end, you should expect better boundaries if things work out.
After this training, the test loss might start to jump up and down, which suggests training has converged. At that point you can stop, lower the learning rate of the final CRF layer, and continue training from the best model you got in the previous stage (not necessarily the last snapshot).
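Lowering the CRF layer's learning rate can be done by shrinking the blobs_lr values of the MULTI_STAGE_MEANFIELD layer in the prototxt before resuming training. A possible sketch (the factor of 10 is an illustrative choice, not a recommendation from the thread; the other parameters are unchanged from the example above):

```
layers {
  name: "inference1"
  type: MULTI_STAGE_MEANFIELD
  bottom: "unary"
  bottom: "Q0"
  bottom: "data"
  top: "upscore"
  # reduced from 10000 / 10000 / 1000 for the fine-tuning stage
  blobs_lr: 1000
  blobs_lr: 1000  # learning rate for w_B
  blobs_lr: 100   # learning rate for compatibility transform
  multi_stage_meanfield_param {
    num_iterations: 5
    compatibility_mode: POTTS
    threshold: 2
    theta_alpha: 50
    theta_beta: 3
    theta_gamma: 3
    spatial_filter_weight: 3
    bilateral_filter_weight: 5
  }
}
```

Resume with the snapshot of the best model, e.g. via Caffe's -weights (or -snapshot) option, so the fine-tuning starts from those parameters rather than from scratch.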
Let me know if you have further progress or questions.
Closing old issues with no recent activity.
Hi!
I tried to train the CRF-RNN network with 3 classes: bird (801 images), bottle (747 images) and chair (1071 images). 90% of the data was used for training and the remaining 10% for testing. After 90,000 iterations, the test loss had decreased to approximately 82,000 (it seemed to stagnate for a relatively long time), and the output of the trained network is still very poor.
Questions
Thank you!
Martin