truc-h-nguyen opened 2 years ago
I think the IOU will get values from the sigmoid, so that should be okay
Are the val/train generator being fed the same images with the same seed?
Yes, this is how I set aside the validation images:
```python
val_samples = 60
train_imgs = coco_imgs[:-val_samples]
train_masks = coco_masks[:-val_samples]
val_imgs = coco_imgs[-val_samples:]
val_masks = coco_masks[-val_samples:]
```
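As a quick sanity check of a split like that (using stand-in filename lists, since the real `coco_imgs`/`coco_masks` contents aren't shown here), the slicing leaves the train and validation sets disjoint:

```python
# Stand-ins for the real COCO lists; only the slicing logic matters here.
coco_imgs = [f"img_{i:04d}.jpg" for i in range(300)]
coco_masks = [f"mask_{i:04d}.png" for i in range(300)]

val_samples = 60
train_imgs = coco_imgs[:-val_samples]
val_imgs = coco_imgs[-val_samples:]

# Every item lands in exactly one set, with no overlap.
assert len(train_imgs) + len(val_imgs) == len(coco_imgs)
assert set(train_imgs).isdisjoint(val_imgs)
```

So the split itself shouldn't be the source of duplicated images between the two generators.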
Then I fed the images to the train/val generators:
```python
train_generator = DataGenerator(input_img=train_imgs, input_mask=train_masks,
                                image_size=img_size, augmentation=True, batch_size=5)
val_generator = DataGenerator(input_img=val_imgs, input_mask=val_masks,
                              image_size=img_size, augmentation=True, batch_size=5)
```
`train_imgs[14]`, `train_generator[14]`, and `val_generator[14]` all show the same image, while `val_imgs[14]` shows a different one.

I had a seed before but dropped it because I don't think it's necessary. Regardless of the seed, `train_generator[14]` and `val_generator[14]` still show the same image.
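One common cause of this symptom is a `__getitem__` that indexes a module-level array (e.g. `coco_imgs`) instead of the data passed to the constructor; if both generators read from the same global list, index 14 returns the same image from each. A minimal sketch of the per-instance pattern (the class body here is illustrative, not the thread's actual `DataGenerator`, which would subclass `keras.utils.Sequence`):

```python
import math

class DataGenerator:
    """Illustrative sketch; the real class would subclass keras.utils.Sequence."""

    def __init__(self, input_img, input_mask, image_size=None,
                 batch_size=5, augmentation=False):
        # Store *this* generator's slice of the data. The bug to check for:
        # a __getitem__ that indexes a global coco_imgs instead of
        # self.input_img returns the same image from both generators.
        self.input_img = input_img
        self.input_mask = input_mask
        self.image_size = image_size
        self.batch_size = batch_size
        self.augmentation = augmentation

    def __len__(self):
        return math.ceil(len(self.input_img) / self.batch_size)

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        # Read from the per-instance lists, never from a shared global.
        return self.input_img[lo:hi], self.input_mask[lo:hi]
```

With this structure, generators built from disjoint train/val slices cannot return the same image at the same index.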
@nickvazz I finally got the model to work! I changed the activation of the last layer to `sigmoid` (so the output values fall between 0 and 1, as you mentioned). I also tried `"leakyrelu"` instead of `"relu"`. The results look promising, but I'm not sure leaky ReLU plays well with IoU, which needs positive values. I updated the IoU metric with new code from here.

I had trouble with `val_generator`: I get different images for `train_imgs[14]` and `val_imgs[14]`, but `train_generator[14]` and `val_generator[14]` show the same picture. And I believe the mask the model predicts is a combination of multiple objects.