@gopi77 Here, it's getting the R channel from the mask image: https://github.com/akirasosa/mobile-semantic-segmentation/blob/master/data.py#L28
So, to get the G channel instead, change it to:
mask_iter = mask_gen.flow(np.expand_dims(masks[:, :, :, 1], axis=3),
                          # use the same seed to apply the same augmentation as the image
                          seed=seed)
Thanks :)
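(For context, here is a minimal sketch of how that snippet fits together, assuming Keras 2's ImageDataGenerator and the data/images-1024.npy / data/masks-1024.npy files mentioned later in this thread; it is not the repository's exact load_data code.)

import numpy as np
from keras.preprocessing.image import ImageDataGenerator

seed = 1
imgs = np.load('data/images-1024.npy')    # expected shape (n, h, w, 3)
masks = np.load('data/masks-1024.npy')    # expected shape (n, h, w, 3)

img_gen = ImageDataGenerator(horizontal_flip=True)
mask_gen = ImageDataGenerator(horizontal_flip=True)

# G channel (index 1) is the face in the LFW masks; keep rank 4 for the iterator
img_iter = img_gen.flow(imgs, seed=seed)
mask_iter = mask_gen.flow(np.expand_dims(masks[:, :, :, 1], axis=3), seed=seed)

# sharing the seed keeps the image and mask augmentations aligned
train_iter = zip(img_iter, mask_iter)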
@akirasosa Similar question: if I have to make the network detect an object, say an airplane, I would create a dataset with 2000 images where airplanes are masked in red and the remaining part of the scene is in blue. If I train this network on that dataset, will it be able to segment airplanes?
@sasikiran It would be possible. But creating binary masks is simpler in your case.
@akirasosa thanks. I don't think creating the dataset is that simple. Our designer says he would be able to mask about 10 images in a single day, so it would take 200 man-days of effort to create a 2000-image dataset. These 2000 images would be taken from the MS COCO dataset.
@sasikiran Ah, sorry. What I mean is that red and green are not necessary. You can use a binary image instead. The only reason I use an RGB mask is that LFW provides it as an RGB mask...
In the case of airplanes, the situation may be different from hair. A PSPNet-like model may work better than a UNet-like model.
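(As an illustration of that point, a minimal sketch of converting an RGB mask array, with the object in red and the background in blue, into a binary mask; the value range and shapes are assumptions, not something from the repository.)

import numpy as np

def rgb_mask_to_binary(masks_rgb):
    # masks_rgb: (n, h, w, 3), object painted red, background painted blue
    red = masks_rgb[..., 0].astype(np.float32)
    blue = masks_rgb[..., 2].astype(np.float32)
    binary = (red > blue).astype(np.float32)      # 1 = object, 0 = background
    return np.expand_dims(binary, axis=-1)        # (n, h, w, 1), rank 4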
Thank you!
@akirasosa Just want to follow up on your last point. I have the following binary ppm image.
When I try to run the train_full.py script, it produces the error below. I suppose the code needs changes to move from RGB to binary - could you point to exactly what those need to be? Thanks!
ValueError: ('Input data in `NumpyArrayIterator` should have rank 4. You passed an array with shape', (80, 1, 1024))
@sectoreight You got the error on https://github.com/akirasosa/mobile-semantic-segmentation/blob/master/data.py#L28, right?
masks[:, :, :, 0]
expects masks shaped like (m, h, w, 3). That is what "should have rank 4" in the error means, but your array has the shape (80, 1, 1024).
The error's actually on a different line. Here's the entire log.
~/Developer/mobile-semantic-segmentation$ python train_full.py --img_file=data/images-1024.npy --mask_file=data/masks-1024.npy
Using TensorFlow backend.
Traceback (most recent call last):
  File "train_full.py", line 96, in <module>
    train(**vars(args))
  File "train_full.py", line 21, in train
    train_gen, validation_gen, img_shape = load_data(img_file, mask_file)
  File "/home/sectoreight/Developer/mobile-semantic-segmentation/data.py", line 71, in load_data
    mask_gen=train_mask_gen)
  File "/home/sectoreight/Developer/mobile-semantic-segmentation/data.py", line 30, in _create_datagen
    seed=seed)
  File "/home/sectoreight/anaconda2/lib/python2.7/site-packages/keras/preprocessing/image.py", line 526, in flow
    save_format=save_format)
  File "/home/sectoreight/anaconda2/lib/python2.7/site-packages/keras/preprocessing/image.py", line 888, in __init__
    'with shape', self.x.shape)
ValueError: ('Input data in `NumpyArrayIterator` should have rank 4. You passed an array with shape', (80, 1, 1024))
I would like to be able to use binary PPM files as masks. How do I do that? I'm referring to your earlier comment on using binary files.
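(A minimal sketch of one way to do that, not the repository's pipeline; the mask directory, glob pattern, and 224x224 size are assumptions. It stacks binary PPM masks into a rank-4 array of shape (n, h, w, 1), which is what NumpyArrayIterator expects.)

import glob
import numpy as np
from PIL import Image

mask_paths = sorted(glob.glob('data/raw/masks/*.ppm'))    # assumed location
masks = []
for path in mask_paths:
    m = Image.open(path).convert('L').resize((224, 224))  # grayscale, 224x224
    masks.append((np.asarray(m) > 127).astype(np.float32))

masks = np.expand_dims(np.stack(masks, axis=0), axis=-1)  # (n, 224, 224, 1)
np.save('data/masks-1024.npy', masks)

With masks shaped like that, the masks[:, :, :, 0] slice in data.py should still pick out the single channel.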
@sasikiran May I ask the shape of data/masks-1024.npy?
n, 224, 224, 3, where n is the number of samples (1024 in our case), 224 is the width and height of the image, and 3 is the number of channels.
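(If you need to verify what is actually on disk, checking the array's shape is one line:)

import numpy as np
print(np.load('data/masks-1024.npy').shape)   # expect something like (n, 224, 224, 3)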
The back story is, we initially wanted to segment airplanes. Later, we decided to do humans instead. For this, we downloaded about 1000 images from various datasets, then got someone who knows Photoshop to mask out everything in each image in blue and leave the human in the photo in red.
Sadly, after training for 250 epochs, we realized the network was picking up reds from the original image instead of the mask. For example, if an image has a red flower or a red stop sign, it tries to learn those red things instead of the red masks added to the humans.
Somewhere, something went wrong.
Ah! So sorry. I mixed up @sectoreight with @sasikiran...
@sectoreight May I ask the shape of data/masks-1024.npy?
@sasikiran, any solution to the issue you last stated? @akirasosa, any insight as to what his issue may be?
I don’t have any issues 🙂
Thanks.
@sasikiran on Jan 30 you stated that "Somewhere, something went wrong," where training had picked up red in the original image rather than the mask. I'm just looking for insight, or whether you found the problem/solution.
The network didn't learn much and had errors because the custom dataset provided for training was total junk. There was no way a network would learn anything from it, hence the results were very erroneous.
@akirasosa Is there a way in the current process to build a model which can mark both the green and red pixels in an image?
@akirasosa you wrote that for airplanes PSPNet could be better than UNet. For segmenting a person (not just a face + hair), do you think UNet is a good pick, even when the person has their arms up in the air? I've tested DeepLabv3+ and Mask R-CNN. DeepLabv3+ gives good results, but it probably won't run on mobile; that's why I got interested in UNet. Another post mentions using Tiramisu (FC-DenseNets) to remove the background of portrait photos, which is still upper body, not full body, so fewer variations. Thanks for your input.
@akashdexati Yes. That is multi-class semantic segmentation.
@ldenoue Though I'm not sure that U-Net works well in your case, one of the advantages of U-Net is that it's easy to train, so you will be able to get results soon. I recommend just trying it.
@akirasosa The line below is coded for hair only: https://github.com/akirasosa/mobile-semantic-segmentation/blob/master/data.py#L28
How can I change it to output both hair and face (in different colors)?
To get the face, I understand I can use index 1 instead of 0, but to get both, what code should I use?
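(Not the repository's code, but a minimal sketch of one way to build a hair-plus-face target from the LFW RGB masks, assuming hair is the R channel, face is the G channel, and values are 0-255. The model's last layer would then need three output channels with a softmax and a categorical loss instead of a single sigmoid channel.)

import numpy as np

def to_multiclass_targets(masks_rgb):
    # masks_rgb: (n, h, w, 3) with hair in R (index 0) and face in G (index 1)
    hair = (masks_rgb[:, :, :, 0] > 127).astype(np.float32)
    face = (masks_rgb[:, :, :, 1] > 127).astype(np.float32)
    background = np.clip(1.0 - hair - face, 0.0, 1.0)
    # one-hot target of shape (n, h, w, 3): [background, hair, face]
    return np.stack([background, hair, face], axis=-1)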
Hi,
The ppm files of the LFW dataset have the face in green, the hair in red, and the remaining area in blue.
If I want to train the model for face segmentation, what changes need to be made? Please let me know.
Regards, Gopi. J