zhixuhao / unet

unet for image segmentation
MIT License

Training accuracy goes to 1 when switching to my own dataset #182

Open · yongshuo-Z opened this issue 4 years ago

yongshuo-Z commented 4 years ago

The code works well with the membrane dataset, but when I changed it to my own dataset and trained the model, the accuracy climbed to 1 by epoch 2, and my test prediction images are all black. I did not change a single line of the original code except the dataset path. Can anyone suggest a solution?

wrbbb commented 4 years ago

> The code works well with the membrane dataset, but when I changed it to my own dataset and trained the model, the accuracy climbed to 1 by epoch 2, and my test prediction images are all black. I did not change a single line of the original code except the dataset path. Can anyone suggest a solution?

Convert the test image to 8-bit and run the prediction with the model again. If the output is still black, change the training set and retrain the model.
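For reference, a minimal sketch of that 8-bit conversion (assuming single-channel test images readable by Pillow; the folder path is a placeholder):

```python
from PIL import Image
import glob

# Convert every test image to 8-bit grayscale ("L" mode) before prediction.
# 'data/test/*.png' is a placeholder; point it at your own test folder.
for path in glob.glob('data/test/*.png'):
    img = Image.open(path).convert('L')  # force 8-bit, single channel
    img.save(path)
```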

yongshuo-Z commented 4 years ago

> The code works well with the membrane dataset, but when I changed it to my own dataset and trained the model, the accuracy climbed to 1 by epoch 2, and my test prediction images are all black. I did not change a single line of the original code except the dataset path. Can anyone suggest a solution?

> Convert the test image to 8-bit and run the prediction with the model again. If the output is still black, change the training set and retrain the model.

Thanks for your advice. I have converted my original training set to the green channel, which is an 8-bit image, but sadly the situation is the same as before... Below are my code and training info. It seems the network does not converge at all.

```python
data_gen_args = dict(rotation_range=0.2,
                     width_shift_range=0.05,
                     height_shift_range=0.05,
                     shear_range=0.05,
                     zoom_range=0.05,
                     horizontal_flip=True,
                     fill_mode='nearest')
myGene = trainGenerator(2, '../unet/data/idrid/train', 'enhanced_green', 'oplabel', data_gen_args, save_to_dir=None)
model = unet()
model_checkpoint = ModelCheckpoint('unet_optic.hdf5', monitor='loss', verbose=1, save_best_only=True)
model.fit_generator(myGene, steps_per_epoch=2000, epochs=5, callbacks=[model_checkpoint])
```

```
Epoch 1/5
Found 54 images belonging to 1 classes.
Found 54 images belonging to 1 classes.
2000/2000 [====================>] - ETA: 1:36 - loss: 0.0126 - acc: 0.9999
```

wrbbb commented 4 years ago

Is the target area in your images very small? In that case the model can reach high accuracy by predicting only background, and the loss from the target region gets drowned out. You can try changing the loss function to Dice. Attention U-Net can also help with very small target areas.
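For what it's worth, a minimal Dice loss sketch for a binary Keras model (my own formulation, not code from this repo; you would recompile the model with it):

```python
from keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    # 2*|A ∩ B| / (|A| + |B|), computed on the flattened masks.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    return 1.0 - dice_coef(y_true, y_pred)

# Example: model.compile(optimizer='adam', loss=dice_loss, metrics=['accuracy'])
```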

yongshuo-Z commented 4 years ago

> Is the target area in your images very small? In that case the model can reach high accuracy by predicting only background, and the loss from the target region gets drowned out. You can try changing the loss function to Dice. Attention U-Net can also help with very small target areas.

Thanks a lot for your reply. I'll try!

lyc1995452-star commented 4 years ago

I have faced the same problem. How did you solve it?

wrbbb commented 4 years ago

> I have faced the same problem. How did you solve it?

I think you can check the pixel values of the output image; even if the segmentation is not good, the output should still be distinguishable.
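Something like this, for example (a sketch; `test_images` here stands for whatever NumPy batch you ran prediction on, the name is not from the repo):

```python
import numpy as np

# Inspect the raw prediction values. If everything sits near one constant
# value, the network never learned to separate foreground from background;
# if there is a spread, the problem is more likely thresholding or the
# scaling used when saving the result as an image.
results = model.predict(test_images, verbose=1)
print('min:', results.min(), 'max:', results.max(), 'mean:', results.mean())
print('pixels above 0.5:', int((results > 0.5).sum()))
```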

lyc1995452-star commented 4 years ago

> I have faced the same problem. How did you solve it?

> I think you can check the pixel values of the output image; even if the segmentation is not good, the output should still be distinguishable.

My image data is 24-bit RGB and I need multi-class segmentation, so I set the multi_class flag to True, but I get an error:

```
ValueError: Error when checking target: expected conv2d_24 to have 4 dimensions, but got array with shape (2, 65536, 12)
```
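If it helps, the shapes suggest the targets were flattened to (batch, H*W, num_class) — 65536 = 256*256 with 12 classes — while conv2d_24 expects a 4-D (batch, H, W, channels) tensor. A hedged sketch of reshaping the mask back before training (assuming 256x256 inputs and 12 classes; the model's last layer also needs to output 12 channels so the shapes agree):

```python
import numpy as np

def unflatten_mask(mask, height=256, width=256, num_class=12):
    # (batch, H*W, num_class) -> (batch, H, W, num_class)
    return np.reshape(mask, (mask.shape[0], height, width, num_class))
```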

happy20200 commented 4 years ago

Where can I find the test file? Why can't I find it? @wrbbb

1651061080 commented 4 years ago

I also have the same problem. I think it is the label set, but I don't know how to solve it. My task is to use radar images to forecast the weather.

deaspo commented 4 years ago

> I also have the same problem. I think it is the label set, but I don't know how to solve it. My task is to use radar images to forecast the weather.

RGB data? And do you need RGB labels as well?