zhixuhao / unet

unet for image segmentation
MIT License
4.55k stars · 1.99k forks

About train, validation and test #122

Open jizhang02 opened 5 years ago

jizhang02 commented 5 years ago

Hello, to anyone who may be concerned: the original U-Net code only has training (trainGenerator) and prediction (predict_generator), so I wonder how to set up training, validation, and testing. Thanks to anyone who knows the answer!

jizhang02 commented 5 years ago

`hist = model.fit_generator(trainGene, validation_data=validGene, validation_steps=3, steps_per_epoch=step_epoch, epochs=epochs, verbose=2, shuffle=True, callbacks=[model_checkpoint, tensorboard, history])` I solved the problem by writing it this way. Hope it helps someone.
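For reference, `step_epoch` and `validation_steps` are typically derived from the dataset sizes so each generator covers its set once per epoch; a small sketch (the dataset sizes here are hypothetical):

```python
import math

def generator_steps(num_samples, batch_size):
    # Number of batches needed to cover the whole set once.
    return math.ceil(num_samples / batch_size)

# Hypothetical dataset sizes for illustration.
num_train, num_valid, batch_size = 30, 6, 2
step_epoch = generator_steps(num_train, batch_size)        # batches per training epoch
validation_steps = generator_steps(num_valid, batch_size)  # validation batches per epoch
```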

deaspo commented 5 years ago

> `hist = model.fit_generator(trainGene, validation_data=validGene, validation_steps=3, steps_per_epoch=step_epoch, epochs=epochs, verbose=2, shuffle=True, callbacks=[model_checkpoint, tensorboard, history])` I solved the problem by writing it this way. Hope it helps someone.

Am curious how you defined the callbacks for tensorboard and history. If you don't mind can you share?

jizhang02 commented 5 years ago

`tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0, write_graph=True, write_images=False)` defines the TensorBoard callback; you need to import the relevant packages. `history = LossHistory()` defines the history callback. LossHistory() is a class used to draw a curve based on the log file, so it just records the contents of the log.
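The LossHistory class itself isn't shown in this thread; a minimal recorder in that spirit might look like the following (the method name follows the Keras callback protocol; in real use it would subclass keras.callbacks.Callback):

```python
class LossHistory:
    """Records per-epoch losses so a curve can be drawn afterwards.

    Method names follow the Keras callback protocol; in real use this
    would subclass keras.callbacks.Callback.
    """

    def __init__(self):
        self.losses = []
        self.val_losses = []

    def on_epoch_end(self, epoch, logs=None):
        # Keras passes a logs dict with the metrics of the finished epoch.
        logs = logs or {}
        self.losses.append(logs.get('loss'))
        self.val_losses.append(logs.get('val_loss'))
```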

deaspo commented 5 years ago

> `tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0, write_graph=True, write_images=False)` defines the TensorBoard callback; you need to import the relevant packages. `history = LossHistory()` defines the history callback. LossHistory() is a class used to draw a curve based on the log file, so it just records the contents of the log.

Thanks!

jcarta commented 4 years ago

> `hist = model.fit_generator(trainGene, validation_data=validGene, validation_steps=3, steps_per_epoch=step_epoch, epochs=epochs, verbose=2, shuffle=True, callbacks=[model_checkpoint, tensorboard, history])` I solved the problem by writing it this way. Hope it helps someone.

What does your validGene implementation look like?

jizhang02 commented 4 years ago

> What does your validGene implementation look like?

It is similar to trainGene, but without the data augmentation part.

jcarta commented 4 years ago

> It is similar to trainGene, but without the data augmentation part.

Something like this?

```python
def validGenerator(batch_size, val_path, image_folder, mask_folder,
                   image_color_mode="grayscale", mask_color_mode="grayscale",
                   image_save_prefix="val_image", mask_save_prefix="val_mask",
                   flag_multi_class=False, num_class=2, save_to_dir=None,
                   target_size=(256, 256), seed=1):
    # No augmentation arguments: validation data is only paired and rescaled.
    image_datagen = ImageDataGenerator()
    mask_datagen = ImageDataGenerator()

    image_generator = image_datagen.flow_from_directory(
        val_path,
        classes=[image_folder],
        class_mode=None,
        color_mode=image_color_mode,
        target_size=target_size,
        batch_size=batch_size,
        save_to_dir=save_to_dir,
        save_prefix=image_save_prefix,
        seed=seed)

    mask_generator = mask_datagen.flow_from_directory(
        val_path,
        classes=[mask_folder],
        class_mode=None,
        color_mode=mask_color_mode,
        target_size=target_size,
        batch_size=batch_size,
        save_to_dir=save_to_dir,
        save_prefix=mask_save_prefix,
        seed=seed)

    # Same seed on both generators keeps each image aligned with its mask.
    valid_generator = zip(image_generator, mask_generator)

    for (img, mask) in valid_generator:
        img, mask = adjustData(img, mask, flag_multi_class, num_class)
        yield (img, mask)
```
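The key detail in the code above is passing the same seed to both flow_from_directory calls, so images and masks are shuffled in the same order. A toy illustration of that idea with plain Python RNGs (no Keras; the file lists are stand-ins):

```python
import random

# Stand-ins for image files and their matching mask files.
images = list(range(8))
masks = [i * 10 for i in range(8)]

def shuffled(items, seed):
    rng = random.Random(seed)  # independent RNG, like each generator's own state
    out = list(items)
    rng.shuffle(out)
    return out

# Same seed -> same shuffle order, so zip() keeps every pair aligned.
pairs = list(zip(shuffled(images, seed=1), shuffled(masks, seed=1)))
assert all(mask == img * 10 for img, mask in pairs)
```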
jizhang02 commented 4 years ago

> Something like this?

yes 👍

gganes3 commented 4 years ago

@Ahgni - just checking whether the above idea worked correctly?

jcarta commented 4 years ago

> yes 👍

I have one last question: what did you set batch_size to? Is this the same as the training generator or is it best to set it to 1?

jizhang02 commented 4 years ago

> I have one last question: what did you set batch_size to? Is this the same as the training generator or is it best to set it to 1?

The batch size is the same as in the training generator. If you're curious, you can compare different batch sizes and see how the results change.
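One caveat worth noting, assuming the fixed validation_steps=3 from the fit_generator call earlier in the thread: fit_generator draws exactly validation_steps batches from the validation generator each epoch, so the number of validation samples actually evaluated is validation_steps * batch_size, and a fixed step count covers a different fraction of the validation set at different batch sizes:

```python
def validation_samples_seen(validation_steps, batch_size):
    # fit_generator draws exactly validation_steps batches per epoch.
    return validation_steps * batch_size

num_valid = 24  # hypothetical validation-set size
for bs in (1, 4, 8):
    seen = validation_samples_seen(3, bs)
    print(f"batch_size={bs}: {seen}/{num_valid} validation samples per epoch")
```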