wjNam opened this issue 7 years ago
Hi @segnam, I think the data augmentation is right only if flipud-flipped images are normal cases in your test set. (In image classification we seldom use flipud, since vertically flipped images rarely appear in the test set.) I notice you also flip the label. Is the output of your network a matrix rather than a single label? Here are several suggestions to help you debug.
Thanks @layumi. I am using this network for semantic segmentation, so the label is a matrix (a per-pixel label map).
Do you mean that flipud should not be used as a training data augmentation?
Is fliplr alone enough to augment the data?
Thanks for your reply.
@segnam Semantic segmentation is interesting~ If you have enough data (>10,000 images?), I think that is enough to train a general network (like the COCO dataset~). My point is that in many cases, low-lying objects (such as sea or beach) do not appear in the top part of the picture, and the sky does not appear in the bottom part. So flipud is considered unreasonable.
I have read some papers about optical flow; they may give you some insight into pixel-to-pixel networks, and I am learning from them. They do not add any additional augmentation: "Deep End2End Voxel2Voxel Prediction" and "FlowNet: Learning Optical Flow with Convolutional Networks".
I want to prevent overfitting by using data augmentation.
But when augmentation is used, the objective blows up to NaN.
I use mean frequency balancing in the loss function; there was no problem when I didn't use augmentation.
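For reference, mean frequency balancing weights each class by (median class frequency) / (that class's frequency), so a class with zero or near-zero frequency in the statistics being used makes the weight blow up and can drive the loss to NaN. Below is a minimal sketch of that weight computation in Python/NumPy (the function name, arguments, and epsilon guard are illustrative assumptions, not the poster's code):

```python
import numpy as np

def class_weights(label_maps, num_classes, eps=1e-8):
    """Mean/median frequency balancing: w_c = median_freq / freq_c.

    label_maps: list of 2-D integer label arrays.
    eps guards against dividing by a zero class frequency,
    which is one common way this weighting produces Inf/NaN.
    """
    counts = np.zeros(num_classes)
    for lab in label_maps:
        counts += np.bincount(lab.ravel(), minlength=num_classes)
    freq = counts / counts.sum()
    median_freq = np.median(freq[freq > 0])
    return median_freq / np.maximum(freq, eps)
```

If the weights are computed per batch rather than once over the whole training set, a batch that happens to miss a class entirely is a likely NaN source; clamping the frequency (or reusing global statistics) avoids it.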
I just augment the data as shown below.
```matlab
rn = randi(3);
if rn == 1
    for p = 1:netnum
        im{p} = fliplr(im{p});
        labels{p} = fliplr(labels{p});
    end
elseif rn == 2
    for p = 1:netnum
        im{p} = flipud(im{p});
        labels{p} = flipud(labels{p});
    end
end
```
The data and labels are randomly flipped left-right or up-down.
Is this the wrong way to augment data?
Please help me, I'm stuck :(