warmspringwinds / tf-image-segmentation

Image Segmentation framework based on Tensorflow and TF-Slim library

Regarding using this framework for my semantic segmentation work #9

Open surfreta opened 7 years ago

surfreta commented 7 years ago

Hi,

I have several questions regarding using this library

1) If the data set I am studying is from a totally different domain than the typical benchmark sets, such as PASCAL VOC, what would be the right pipeline for using your framework? Can I still use the pre-trained model (weights) and re-train the model on my dataset?

2) The problem I am studying has a limited number of images, and each image is large, i.e., 4096 × 4096 pixels. The masked area covers about 5%–10% of each image. I have been thinking of generating a large training set of crops from these big images, with each training image being 128 × 128. In other words, building a model based on 128 × 128 patches.

During the testing stage, I would run sub-frame prediction (each sub-frame being 128 × 128) over the test image and stitch the predicted masks together. Is this the right approach?

Besides, are there any suggestions on generating such a training set?
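To make the idea concrete, here is a rough sketch of the tiling-and-stitching approach I have in mind, in plain NumPy. `predict_patch` is just a stand-in for whatever trained model ends up being used, and overlap handling / padding for sizes that are not divisible by the patch size are left out:

```python
import numpy as np

def predict_patch(patch):
    # Stand-in for the trained segmentation model; it should return a
    # per-pixel class map with the same spatial size as the input patch.
    raise NotImplementedError

def tile_predict(image, patch_size=128):
    """Predict a full-size mask by running the model on non-overlapping
    patches and stitching the per-patch results back together.

    Assumes the image height and width are divisible by patch_size;
    otherwise the image should be padded first.
    """
    height, width = image.shape[:2]
    full_mask = np.zeros((height, width), dtype=np.uint8)

    for row in range(0, height, patch_size):
        for col in range(0, width, patch_size):
            patch = image[row:row + patch_size, col:col + patch_size]
            full_mask[row:row + patch_size,
                      col:col + patch_size] = predict_patch(patch)

    return full_mask
```

In practice I suppose one would also use overlapping patches (averaging the predictions near the borders to hide seams) and oversample crops that actually contain the 5%–10% foreground, to balance the classes.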

MrChristo59 commented 7 years ago

I'm also very interested in any advice on generating a new dataset for training.

warmspringwinds commented 7 years ago

@surfreta @MrChristo59

I will upload an example of usage on a different dataset that I have done recently.

A small number of images is usually a problem.

Reusing pretrained weights won't make it worse, I think. At least, all of the works that I have seen so far use pretrained weights.

Let me know if it helps.
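Until I upload that example, here is a rough sketch of the usual TF-Slim fine-tuning pattern (not necessarily this repo's exact code): after building the FCN graph, restore only the variables that also exist in the pretrained VGG-16 classification checkpoint, and let the freshly added upsampling / score layers train from scratch. The checkpoint path and scope names below are just placeholders:

```python
import tensorflow as tf

slim = tf.contrib.slim

# Placeholder checkpoint path -- substitute the real VGG-16 checkpoint.
vgg_checkpoint_path = 'vgg_16.ckpt'

# Build your FCN graph first, then collect the variables to restore.
# Scope names here are illustrative: the new layers for your own number
# of classes are excluded and keep their fresh initialization.
variables_to_restore = slim.get_variables_to_restore(
    exclude=['vgg_16/fc8', 'fcn_32s/upsampling'])

init_fn = slim.assign_from_checkpoint_fn(vgg_checkpoint_path,
                                         variables_to_restore,
                                         ignore_missing_vars=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    init_fn(sess)
    # ... the training loop on the new dataset would go here ...
```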

MrChristo59 commented 7 years ago

I don't know if I understood it right, but will you upload an example of how to re-train the model with a new dataset? If I'm right, that will be awesome!

warmspringwinds commented 7 years ago

@MrChristo59, yeah, that is what I meant :)

MrChristo59 commented 7 years ago

Looking forward to it.

Just a little question to be sure I'm right. To create a dataset for segmentation training, you need an image and another one with the mask of what you want to learn. I guess the color of the mask defines the class it refers to. Am I right? If yes, is there any advice on the proper way to make this mask (border size, colors, ...)? Thanks.
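From what I have read about the PASCAL VOC convention (please correct me if I'm wrong), the annotation is a single-channel paletted PNG whose pixel values are class indices (0 = background, 255 = the "ambiguous" border that is ignored in the loss), and the colors one sees are just the PNG palette applied on top. A quick way to check what a mask actually contains (the file name is only an example):

```python
import numpy as np
from PIL import Image

# PASCAL VOC-style annotation: a paletted, single-channel PNG in which
# each pixel stores a class index rather than an RGB color.
mask = np.array(Image.open('2007_000033.png'))  # example VOC file name

print(mask.shape)       # (height, width) -- no color channels
print(np.unique(mask))  # the class indices present in this image
```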

MrChristo59 commented 7 years ago

Hey Daniil, did you release the example yet? I don't know if it's on your blog or on the GitHub repo.

deepk91 commented 6 years ago

Hey @warmspringwinds, did you upload any example of training a new dataset with your scripts? I am trying to train on a new dataset with a small number of images (around 250), but I am facing an OutOfRangeError, as listed in the issues. Could you help resolve this problem?
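(For context, the error seems to come from the TF 1.x queue-based input pipeline: once the filename queue has served `num_epochs` passes over the data, every further read raises `tf.errors.OutOfRangeError`. A minimal sketch of the pattern I am using, with placeholder names:)

```python
import tensorflow as tf

# Placeholder TFRecord file name -- substitute your own.
tfrecord_filename = 'my_dataset_train.tfrecords'

# The queue serves the file list num_epochs times and then closes;
# after that, every read raises tf.errors.OutOfRangeError.
filename_queue = tf.train.string_input_producer(
    [tfrecord_filename], num_epochs=10)

reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
# ... decoding, batching and the model would be built on top of this ...

with tf.Session() as sess:
    # num_epochs is kept in a local variable, so this initializer is required.
    sess.run([tf.global_variables_initializer(),
              tf.local_variables_initializer()])
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while True:
            sess.run(serialized_example)  # stand-in for sess.run(train_step)
    except tf.errors.OutOfRangeError:
        print('Queue exhausted: num_epochs passes over the records are done.')
    finally:
        coord.request_stop()
        coord.join(threads)
```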

warmspringwinds commented 6 years ago

Hi,

I would recommend trying this out, because it has an example of applying it to a different dataset:

https://github.com/warmspringwinds/pytorch-segmentation-detection


deepk91 commented 6 years ago

Thank you @warmspringwinds for this suggestion. I want to use the FCN-32s model for segmentation, initialized from VGG-16. After going through some of your files, what I understood is that the pascal_voc.py script in the dataset folder makes use of the PASCAL 2012 and Berkeley PASCAL datasets, which you mention in this repository as well. I can substitute the root path to my dataset, and it works similarly for generating tfrecords by using the getannotationpairs methods in utils.pascal_voc.py. What I could not understand is where the explicit example of using a different dataset is. I am sorry, I am just new to deep learning with CNNs.
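(To show what I mean, this is roughly how I am trying to serialize my own image/annotation pairs into a tfrecords file; the file paths and feature keys below are only illustrative and may not match this repo's helpers exactly:)

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Illustrative (image, annotation) path pairs for a custom dataset.
filename_pairs = [('images/img_0001.jpg', 'masks/img_0001.png')]

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

writer = tf.python_io.TFRecordWriter('my_dataset_train.tfrecords')

for image_path, mask_path in filename_pairs:
    image = np.array(Image.open(image_path))
    mask = np.array(Image.open(mask_path))
    height, width = image.shape[0], image.shape[1]

    # One Example per image/mask pair; the reader side must decode
    # these raw bytes back into arrays using the stored height/width.
    example = tf.train.Example(features=tf.train.Features(feature={
        'height': _int64_feature(height),
        'width': _int64_feature(width),
        'image_raw': _bytes_feature(image.tobytes()),
        'mask_raw': _bytes_feature(mask.tobytes()),
    }))
    writer.write(example.SerializeToString())

writer.close()
```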