jakeret / tf_unet

Generic U-Net Tensorflow implementation for image segmentation
GNU General Public License v3.0
1.9k stars 748 forks

Cannot train with max pooling size other than 2 #210

Open galaxyfanfanwu opened 6 years ago

galaxyfanfanwu commented 6 years ago

Hi, I noticed that the network cannot be trained with a `pool_size` other than the default value of 2. The error message is:

```
InvalidArgumentError (see above for traceback): input and filter must have the same depth: 96 vs 144
	 [[Node: up_conv_1/conv2d/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](up_conv_1/crop_and_concat/concat, up_conv_1/w1/read)]]
```

I changed the numbers related to the max pooling size from 2 to the new `pool_size` in `unet.py`, as well as in the definition of the `deconv2d` function in `layers.py`, but it still didn't work. Is there some other way to modify `pool_size` in the code, or does it have to be 2? Thank you very much!
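For reference, the "96 vs 144" depth mismatch can be reproduced with plain channel arithmetic: the concatenation feeds more channels into `up_conv_1/conv2d` than its filter expects. A minimal sketch, where the individual channel counts are assumptions chosen to match the numbers in the error rather than values taken from the actual run:

```python
# Hypothetical channel counts chosen to match the "96 vs 144" error;
# the real values depend on features_root and the number of layers.
skip_channels = 48       # channels of the skip connection from the down path
deconv_channels = 96     # channels produced by a mis-sized up-conv
filter_depth = 96        # input depth the following conv filter expects

concat_channels = skip_channels + deconv_channels
print(concat_channels, "vs", filter_depth)  # 144 vs 96: the reported mismatch

# With the intended channel halving (features // 2) the shapes line up:
deconv_channels_fixed = deconv_channels // 2
assert skip_channels + deconv_channels_fixed == filter_depth
```

This suggests the up-conv produced its full channel count instead of half of it, so the concat ends up with 144 channels where the filter was built for 96.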

jakeret commented 6 years ago

It's possible that there is a bug in the upconv or in the concatenation. It would be great if you could try to adapt the current implementation to see if that solves the problem.

galaxyfanfanwu commented 6 years ago

@jakeret Thanks Joel. If I set `pool_size=3`, for instance, the `output_shape` in `deconv2d` becomes `tf.stack([x_shape[0], x_shape[1] * 3, x_shape[2] * 3, x_shape[3] // 3])`. Is that correct? I'm not sure whether the last dimension should be `x_shape[3]` or `x_shape[3] // 3`, but I tried both. As for `crop_and_concat`, I changed the offsets to `[0, (x1_shape[1] - x2_shape[1]) // 3, (x1_shape[2] - x2_shape[2]) // 3, 0]`. After that the error message still showed up...
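One way to sanity-check this is to note that `pool_size` should only scale the spatial dimensions; the channel halving in the up-conv is fixed at `// 2`, since the deconv weight has shape `[pool_size, pool_size, features // 2, features]`, and the `// 2` in the crop offsets just centers the crop and is unrelated to `pool_size`. A rough sketch of the shape arithmetic, assuming NHWC tensors (this is a sanity check, not the library's actual code):

```python
def deconv_output_shape(x_shape, pool_size):
    """Expected output shape of the up-convolution for a given pool size.

    Spatial dimensions grow by pool_size; the channel dimension is halved
    regardless of pool size, so the later concat with the skip connection
    restores the full feature count.
    """
    n, h, w, c = x_shape
    return (n, h * pool_size, w * pool_size, c // 2)

def crop_offsets(x1_shape, x2_shape):
    # The // 2 here centers the crop of x1 onto x2; it is unrelated to
    # pool_size and should stay // 2 even when pool_size != 2.
    return (0, (x1_shape[1] - x2_shape[1]) // 2,
            (x1_shape[2] - x2_shape[2]) // 2, 0)

print(deconv_output_shape((1, 16, 16, 96), 3))   # (1, 48, 48, 48)
print(crop_offsets((1, 52, 52, 48), (1, 48, 48, 48)))  # (0, 2, 2, 0)
```

Under this reading, the last dimension of `output_shape` would stay `x_shape[3] // 2` even for `pool_size=3`, and only the two spatial entries would change to `* 3`.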

jakeret commented 6 years ago

I'm not sure where the bug is. You probably have to step through the code and look at the tensor shapes.
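As a starting point for that, the spatial sizes can be traced without running the graph at all. A rough sketch, assuming two unpadded 3x3 convolutions per level followed by max pooling, as in the original U-Net layout (the function name and structure are illustrative, not from `unet.py`):

```python
def trace_down_path(in_size, layers, pool_size, filter_size=3):
    """Trace the spatial size through the contracting path.

    Assumes two VALID (unpadded) filter_size convolutions per level,
    each shrinking the map by filter_size - 1, followed by max pooling
    on every level except the last.
    """
    sizes = []
    size = in_size
    for layer in range(layers):
        size -= 2 * (filter_size - 1)   # two VALID convs per level
        sizes.append(size)
        if layer < layers - 1:
            if size % pool_size != 0:
                print(f"layer {layer}: size {size} is not divisible "
                      f"by pool_size {pool_size}")
            size //= pool_size          # max pooling
    return sizes

print(trace_down_path(572, 5, 2))  # [568, 280, 136, 64, 28]
```

Running this with `pool_size=3` quickly shows which levels produce sizes that are not divisible by the pool size, which is a common source of off-by-one crops in `crop_and_concat` on the way back up.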