Closed dimaxano closed 6 years ago
Are you training on your own dataset?
@nicolov Yes, it is my own dataset. Please tell me whether I am right: do I need to convert my PNG labels into palette images and store them on disk? (For now I am converting the PNGs to palette images online.)
The training and prediction steps operate on single-channel pngs, where the value of each pixel corresponds to the class label (for example, 0-21 for the Pascal dataset). That's why it's normal that you're seeing black images (21/255 is a rather dark pixel). To be able to debug, you can map these values to any arbitrary set of colors.
To clear things up, you can look at convert_masks.py, where I convert a MATLAB dataset to PNGs for training.
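The debugging idea above (mapping label values to arbitrary colors so the mask is visible) can be sketched roughly like this; the function name and file paths are placeholders, not part of the repository:

```python
# Hypothetical helper: visualize a single-channel label PNG by mapping
# class indices (e.g. 0-21 for Pascal VOC) to arbitrary bright colors.
import numpy as np
from PIL import Image

def colorize_labels(label_png, num_classes=22):
    labels = np.array(Image.open(label_png))  # (H, W) array of class indices
    rng = np.random.RandomState(0)            # fixed seed -> stable colors
    palette = rng.randint(0, 256, size=(num_classes, 3)).astype(np.uint8)
    palette[0] = (0, 0, 0)                    # keep background black
    # Index the palette with the label array to get an (H, W, 3) RGB image
    return Image.fromarray(palette[labels])

# colorize_labels("mask.png").save("mask_debug.png")
```

This just replaces each class index with a fixed random color, so a correct label image will show clearly distinguishable regions instead of near-black pixels.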
@nicolov I solved the dataset-preparation problems, but after training the model predicts the whole image as a single class, and not the background (binary segmentation). I checked the dataset and it is completely correct. Any idea what might cause this? Here is an example of the code for creating labels from ordinary PNGs: https://gist.github.com/dimaxano/449643c49b4cbc136d17ade8721a1ea2
Does it seem correct?
@nicolov Another question: how should I change the label_margin parameter for my particular dataset?
How many classes do you get? How can you be sure that your dataset is correct? Have you counted the number of 0/1 pixels in the training images?
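The pixel-counting sanity check suggested above could look like the sketch below; the directory path is a placeholder. A dataset where one class dominates almost completely is a common reason a model collapses to predicting a single class:

```python
# Count how many pixels of each class appear across all label PNGs
# in a directory (paths are hypothetical).
import glob
from collections import Counter

import numpy as np
from PIL import Image

def class_pixel_counts(label_dir):
    counts = Counter()
    for path in sorted(glob.glob(label_dir + "/*.png")):
        labels = np.array(Image.open(path))
        # np.unique returns the distinct label values and their frequencies
        values, freqs = np.unique(labels, return_counts=True)
        counts.update(dict(zip(values.tolist(), freqs.tolist())))
    return counts

# print(class_pixel_counts("labels"))
```

If the counts show, say, 99% background pixels, the training loss may need class weighting before the model will learn the minority class.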
I reverse-engineered the label_margin variable based on the original Caffe implementation, but I forgot the details.
I am not currently working with DilatedNet, but if I am not mistaken, the problem was overfitting, which is why my model predicted only one class.
Hi! I get val accuracy = 1, but when I try to predict a mask for an image from the training set, it displays a black image. Does anybody know the reason for this behaviour?
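As noted earlier in the thread, a predicted mask of raw class indices (0 and 1 for binary segmentation) looks black when viewed directly. One way to rule that out, sketched with placeholder names, is to rescale the indices to the full 0-255 range before saving:

```python
# Rescale a predicted class-index mask so it is visible when opened
# in an image viewer (0 -> black, max class -> white).
import numpy as np
from PIL import Image

def save_visible_mask(pred, out_path, num_classes=2):
    # pred: (H, W) array of class indices in [0, num_classes - 1]
    scaled = (pred.astype(np.float32) * 255 / (num_classes - 1)).astype(np.uint8)
    Image.fromarray(scaled).save(out_path)
```

If the rescaled image is still all black, the model really is predicting only class 0 everywhere, which points back at the training setup rather than the visualization.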