jakeret / tf_unet

Generic U-Net Tensorflow implementation for image segmentation
GNU General Public License v3.0

Why is the minibatch loss so strange #162

Open rangerli opened 6 years ago

rangerli commented 6 years ago

Here is my code:

from tf_unet import unet, util, image_util

# preparing data loading
search_path = 'data/train/*.tif'
data_provider = image_util.ImageDataProvider(search_path)

# setup & training
net = unet.Unet(layers=4, features_root=64, channels=data_provider.channels, n_class=2)
trainer = unet.Trainer(net, optimizer='adam')
path = trainer.train(data_provider, './unet_trained', training_iters=64, epochs=100)

There are 16,000 images (500×500) in my dataset. Running the code produces the result shown in the attached screenshot, and I am quite confused by it. Can you give me some advice on the code? Thanks a lot.

jakeret commented 6 years ago

Hmm, odd that the loss is the same for all mini-batches. Does it change after a few iterations? Do you get the same result every time you run it? I would try it with a smaller network and dataset to get a feeling for what is not working correctly.
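For reference, a minimal sanity check along those lines might look like the sketch below. It only uses the tf_unet calls already shown in the issue; the reduced layers/features_root values, the 'data/train_small/*.tif' path, and the './unet_debug' output directory are assumptions for illustration, not values from this thread.

```python
from tf_unet import unet, image_util

# Assumed path to a small subset of the training images (hypothetical).
small_provider = image_util.ImageDataProvider('data/train_small/*.tif')

# Deliberately small network so a single run finishes quickly.
small_net = unet.Unet(layers=3, features_root=16,
                      channels=small_provider.channels, n_class=2)
trainer = unet.Trainer(small_net, optimizer='adam')

# A handful of iterations/epochs is enough to see whether the loss moves at all.
trainer.train(small_provider, './unet_debug', training_iters=8, epochs=5)
```

If the loss stays frozen even in this tiny setup, the problem is unlikely to be the architecture and more likely to be the input data or labels.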

rangerli commented 6 years ago

@jakeret Thanks for your reply. The mini-batch loss eventually decreases to 176.7524. I get the same result every time, even with different numbers of layers and features_root.

jakeret commented 6 years ago

The loss is unnaturally high and should decrease every epoch. Given that the loss is always the same, independent of the network architecture, I suspect that something might not be right with the input data. It might be worth checking what data_provider(1) returns. Does the data look like what you expect? Is it within a reasonable range?
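A quick way to do that check is sketched below, assuming the data_provider built earlier in this thread and that it returns an (images, labels) pair as in the tf_unet examples; the exact printout is just for illustration.

```python
import numpy as np

# Pull a single batch from the provider and inspect shapes and value ranges.
x, y = data_provider(1)

print('image batch shape:', x.shape, 'min/max:', x.min(), x.max())
print('label batch shape:', y.shape, 'unique label values:', np.unique(y))
```

Images that are not roughly normalized, or label maps that contain only one class (or unexpected values), would both be consistent with a loss that never moves.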