jakeret / tf_unet

Generic U-Net Tensorflow implementation for image segmentation
GNU General Public License v3.0

Testing Images #156

Open Ironeek11 opened 6 years ago

Ironeek11 commented 6 years ago

Is it possible to test images without having the corresponding ground truth? I have a dataset containing only MRI scans, with no ground truth. I wanted to produce a segmented output for this data. Your help would be much appreciated. Thank you!

jakeret commented 6 years ago

Once you have trained the network, you can get predictions from it without any ground truth.

TheoLiu31 commented 6 years ago

How can I start training without label images (`_mask.tif`)? When using `image_util.ImageDataProvider`, if I don't have label images in my folder, I get the error message "missing label images", so I don't understand your response.

jakeret commented 6 years ago

To train the network you need ground truth (supervised learning). If it's not in the same folder, you can either adapt the implementation or write a script to move it.
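Such a move script could look like the sketch below (the directory names and the `collect_masks` helper are only an illustration; the `_mask.tif` suffix is tf_unet's label-naming convention mentioned above):

```python
import os
import shutil
import tempfile

def collect_masks(image_dir, mask_dir, suffix="_mask.tif"):
    """Copy each label image from mask_dir next to its data image in
    image_dir, following the <name>_mask.tif naming convention."""
    copied = []
    for name in os.listdir(mask_dir):
        if name.endswith(suffix):
            shutil.copy(os.path.join(mask_dir, name),
                        os.path.join(image_dir, name))
            copied.append(name)
    return sorted(copied)

# demo with temporary directories standing in for the real data folders
img_dir = tempfile.mkdtemp()
msk_dir = tempfile.mkdtemp()
open(os.path.join(img_dir, "scan1.tif"), "w").close()
open(os.path.join(msk_dir, "scan1_mask.tif"), "w").close()
print(collect_masks(img_dir, msk_dir))
```

After the copy, `ImageDataProvider` should find both `scan1.tif` and `scan1_mask.tif` in the same folder.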

TheoLiu31 commented 6 years ago

Thank you for your quick answer, I understand better now. My images are generated by a home-made microscope, and I tried to segment the cells with your model. I don't have ground-truth images, so I generated label images manually with a simple threshold. The result seems OK for a first try.
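A simple-threshold labelling like that can be sketched in a few lines of numpy (the 0.5 threshold is an arbitrary placeholder here; a method such as Otsu's would pick it automatically in practice):

```python
import numpy as np

def threshold_mask(image, thresh=0.5):
    """Build a binary ground-truth mask: True where the pixel value
    exceeds the threshold (cell), False elsewhere (background)."""
    return image > thresh

# tiny 2x2 example image with intensities in [0, 1]
img = np.array([[0.1, 0.9],
                [0.8, 0.2]])
mask = threshold_mask(img)
print(mask)  # boolean mask: True marks the bright "cell" pixels
```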

I have 10 images (512*512) in my test folder and I need to make predictions for all of them. Below is my code:

```python
from tf_unet import unet, util, image_util

data_provider = image_util.ImageDataProvider("/home/ubuntuDocuments/DeepLearning/tf_unet-master/data/MCTS/*.tif")

# setup net & training
net = unet.Unet(channels=data_provider.channels, n_class=data_provider.n_class,
                layers=3, features_root=16, cost="cross_entropy")
trainer = unet.Trainer(net, optimizer="adam", opt_kwargs=dict(beta1=0.91))
path = trainer.train(data_provider, "./unet_trained_mcts",
                     training_iters=20, epochs=20, display_step=2, restore=False)

# prediction
data, label = data_provider(10)
prediction = net.predict(path, data)

error = unet.error_rate(prediction, util.crop_to_shape(label, prediction.shape))
print("Testing error rate: {:.2f}%".format(error))

img = util.to_rgb(prediction[..., 1].reshape(-1, prediction.shape[2], 1))
util.save_image(img, "prediction.jpg")
```

The problem with this code is that all 10 images are saved into the same prediction.jpg (its size is 472 * 4720). Is there a way to save the 10 images individually, as prediction_1.jpg to prediction_10.jpg? I'm new to Python programming, so it would be great if you could help me out. Thanks again.

Théo

jakeret commented 6 years ago

Hi Théo, glad that helped. You could write a for loop from 0 to 9 and always retrieve a single image that you then feed into net.predict.

That being said, 10 images for training is probably on the low side. Typically you would need at least an order of magnitude more training data to get really good results.
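The loop could look like the sketch below. To keep it runnable without a trained model, `fake_predict` is a dummy stand-in; in the real code each `single` batch would go through `net.predict(path, single)` followed by `util.to_rgb` and `util.save_image`, as in the script above:

```python
import numpy as np

def fake_predict(batch):
    # stand-in for net.predict(path, batch): crops the borders the way the
    # U-Net's valid convolutions do, and adds a 2-class channel axis
    return batch[:, 20:-20, 20:-20, :].repeat(2, axis=-1)

data = np.random.rand(10, 512, 512, 1)  # 10 grayscale 512x512 test images

filenames = []
for i in range(10):
    single = data[i:i+1]  # keep the batch axis: shape (1, 512, 512, 1)
    prediction = fake_predict(single)
    name = "prediction_{}.jpg".format(i + 1)
    filenames.append(name)
    # real code: img = util.to_rgb(prediction[..., 1].reshape(
    #     -1, prediction.shape[2], 1)); util.save_image(img, name)

print(filenames[0], filenames[-1])
```

The key detail is the `data[i:i+1]` slice: it selects one image while preserving the 4D batch shape that `net.predict` expects, whereas `data[i]` would drop the batch axis.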

TheoLiu31 commented 6 years ago

The images acquired with my microscope are typically one 3D stack (512*512*150), and I usually have at least 48 stacks (one stack per hour over 2 days). In your opinion, how many images should I use?

When applying your model, I separate my 3D TIFF images into individual slices, which is why I am testing only 10 images for now. Once I get a good result, I am going to write more scripts to load the 3D TIFF images directly. I mostly programmed with C++ and OpenCV before, and I am learning Python with your U-Net model now. Thanks very much for your help.