ddb4ng opened this issue 6 years ago
Looks like a nice competition!
The resulting predictions are probability maps. Very small numbers might be a hint that the network hasn't learned much.
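A quick way to sanity-check this is to look at the statistics of the foreground channel. This is only a sketch: the `prediction` array here is a hypothetical stand-in for what `net.predict()` would return (batch, height, width, n_class softmax probabilities), faked with random data for illustration.

```python
import numpy as np

# Hypothetical prediction of shape (1, ny, nx, 2): per-pixel softmax
# probabilities over (background, car). Faked here for illustration.
prediction = np.random.default_rng(0).dirichlet([1, 1], size=(1, 4, 4)).astype(np.float32)

# Foreground probability map (class 1).
fg = prediction[0, ..., 1]

# If the network has learned something, confident pixels push these
# statistics away from the uninformative value of 0.5 toward 0 or 1.
print("min/mean/max:", fg.min(), fg.mean(), fg.max())

# Threshold to obtain a binary mask.
mask = fg > 0.5
```

If min, mean, and max all hover near the same small value, the network is essentially predicting a constant and has not learned a useful segmentation.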
Thank you very much for your valuable comments @jakeret . I will let you know how it went :)
Performance got significantly better with your suggestions above. I haven't tried different layers and filters yet though. Also, I got good performance with a different repository, so there is currently no need for me to investigate further.
Out of curiosity: could you point us to this other repo?
Sorry for my very late reply. I am using and contributing to the semantic segmentation system from TU Eindhoven, MPS group: https://github.com/tuemps/semantic-segmentation . Their system is not ready for public use yet though. I am also using DeepLab (for stuff related to the Carvana dataset): https://github.com/tensorflow/models/tree/master/research/deeplab
Hello @ddb4ng,
I'm trying to use test.py as you described, but I always get a black output. Why is that?
Are there any changes I have to make after retrieving the saved model?
Thanks.
@abderhasan I didn't get it to work well myself either. Maybe @jakeret can help you. Or you could try this alternative, with which we got really good results on the Kaggle Carvana dataset: https://github.com/petrosgk/Kaggle-Carvana-Image-Masking-Challenge
@jakeret The link above points to the repository that we finally used to get really good results.
Goal
First of all, thank you very much jakeret et al. for this wonderful project. It is a great piece of work that is also usable by others.
My goal is to train TensorFlow UNet on the Kaggle Carvana dataset. This is an image segmentation problem with 2 classes: car and background.
Issue
Everything works, but I do not get good results yet (see below).
I am hoping to improve the results by changing hyperparameters or network layers, but which ones should I change, and how? I already tried some different hyperparameter values (see below), but without much success.
I looked at the issues that were already posted. Some are related, but none has a solution yet. Maybe we can bundle our efforts:
#170
#168
#112
Sample output
Results generated during cross validation
Below are some results generated during cross validation (left: input, center: ground truth, right: output; original input data resized to 512x512). These results are not very good, but at least the network does something meaningful.
Results generated during testing
Below are some results generated during testing on previously unseen data. These are the results that count, but they are not satisfactory.
Code
/tf_unet/unet.py : Trainer._get_optimizer():
train.py (not in tf_unet repository):
test.py (not in tf_unet repository):
Discussion
As you can see, the segmentation of car and background is not working well yet. Which hyperparameter values or network layers will lead to success?
Also, why do the cross-validation results and the test results look so different? And why do I need to multiply the segmentation by 1000 in order to see the test result? Shouldn't these be binary masks? If you look closely, not all pixel intensities are the same either.
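On the last point: the network outputs a per-class softmax, not a hard mask, so neighbouring pixels naturally carry slightly different probabilities; an explicit argmax is needed to binarize. A sketch, assuming a two-class prediction with the class axis last (the `prediction` array here is faked for illustration):

```python
import numpy as np

# Hypothetical two-class prediction (background, car), shape (ny, nx, 2),
# each pixel a softmax distribution. Faked here for illustration.
prediction = np.random.default_rng(2).dirichlet([1, 1], size=(4, 4)).astype(np.float32)

# argmax over the class axis yields the actual binary mask ...
mask = prediction.argmax(axis=-1)          # values in {0, 1}

# ... which is then scaled for display, so no "x1000" trick is needed.
display = (mask * 255).astype(np.uint8)
```

This would also explain the non-uniform pixel intensities: before the argmax, the raw map is a field of soft probabilities, not a binary mask.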
Thank you very much in advance for any help you can provide (anyone).