jakeret / tf_unet

Generic U-Net Tensorflow implementation for image segmentation
GNU General Public License v3.0

fix dropout = 1.0 issue. If dropout = 1.0, it should not run dropout … #202

Open mpjlu opened 6 years ago

mpjlu commented 6 years ago

The Python dropout op uses the following check on the keep_prob value: `if tensor_util.constant_value(keep_prob) == 1: return x`. If keep_prob is a placeholder, `tensor_util.constant_value(keep_prob)` returns None, so the if statement is always false and the dropout op runs even when 1.0 is fed.
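For context, a minimal TF 1.x sketch of why the check never fires for a placeholder (this just reproduces the behavior, it is not tf_unet code):

```python
import tensorflow as tf
from tensorflow.python.framework import tensor_util

# With a constant, constant_value recovers the Python value, so
# `constant_value(keep_prob) == 1` is True and tf.nn.dropout can
# return the input tensor unchanged.
const_kp = tf.constant(1.0)
print(tensor_util.constant_value(const_kp))  # 1.0

# With a placeholder the value is only known at session.run time:
# constant_value returns None, `None == 1` is False, and the dropout
# op is always built into the graph, even when 1.0 is fed.
ph_kp = tf.placeholder(tf.float32, name="keep_prob")
print(tensor_util.constant_value(ph_kp))  # None
```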

jakeret commented 6 years ago

Thanks for your contribution. I see why this is better during training. But how should we control the dropout during validation and prediction? There we want to set the dropout to 1. Or am I missing something?

mpjlu commented 6 years ago

For prediction we don't need dropout. If dropout is set to 1, the correct behavior is for the dropout layer to return its input directly.
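A minimal sketch of the proposed behavior, assuming keep_prob is a plain Python float at graph-construction time (the helper name `maybe_dropout` is illustrative, not tf_unet's actual code):

```python
import tensorflow as tf

def maybe_dropout(x, keep_prob):
    """Build a dropout op only when it actually does something.

    If keep_prob is a Python float equal to 1.0, dropout is a no-op,
    so we skip building the op entirely instead of relying on
    tf.nn.dropout's constant_value check.
    """
    if isinstance(keep_prob, float) and keep_prob >= 1.0:
        return x
    return tf.nn.dropout(x, keep_prob)
```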

jakeret commented 6 years ago

Right. So during training we want dropout to be < 1 and during validation it should be = 1. How can we control this?

mpjlu commented 6 years ago

We can create two Unets with different keep_prob values, one for training and one for validation; see the sketch below. What do you think? Since the dropout layer is quite time-consuming, it is better to skip it during validation and inference.
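For example, a hypothetical sketch (assuming the Unet constructor accepted keep_prob directly, which is not its current signature; today keep_prob is a placeholder fed via feed_dict):

```python
from tf_unet import unet

# Hypothetical: keep_prob fixed at graph-construction time rather
# than fed through a placeholder.
train_net = unet.Unet(channels=1, n_class=2, keep_prob=0.75)  # dropout built in
val_net = unet.Unet(channels=1, n_class=2, keep_prob=1.0)     # dropout skipped
```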

jakeret commented 6 years ago

Don't we have to train two models then? I wasn't aware that dropout is so time-consuming. How much does it affect training/validation performance?

mpjlu commented 6 years ago

For inference, dropout accounts for about 16% of the iteration time (second row of the profiling screenshot below). We don't need to train two models; we just need to create a new model (one with keep_prob = 1) for inference/validation.

[profiling screenshot]

mpjlu commented 6 years ago

Hi @jakeret, any comments on the data? The measurements are on CPU.

jakeret commented 6 years ago

A 16% performance improvement is nice. However, I still don't fully understand what the training/validation procedure would look like. If a new model is created for validation, how would you transfer the learned weights?

mpjlu commented 6 years ago

Sorry for the late reply.
How about passing two nets when creating the Trainer object? The train_net is used for training and the validation_net for validation. train_net can save the model after each epoch, and validation_net can restore it for validation. What do you think?

jakeret commented 6 years ago

I don't see how this could be implemented. The computation graphs of the two networks would differ, which makes it hard to transfer the weights from one to the other.

mpjlu commented 6 years ago

The dropout layer has no weights, so it is fine to save the model from the train net and restore it in the validation net; see the sketch below.
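A rough TF 1.x sketch of that save/restore flow. Since dropout contributes no variables, both graphs declare the same variable set and a checkpoint written by one restores into the other. (The keep_prob constructor argument is hypothetical, as above.)

```python
import tensorflow as tf
from tf_unet import unet

# Training graph: dropout active.
train_graph = tf.Graph()
with train_graph.as_default():
    train_net = unet.Unet(channels=1, n_class=2, keep_prob=0.75)
    init_op = tf.global_variables_initializer()
    train_saver = tf.train.Saver()

# Validation graph: identical variables, dropout skipped entirely.
val_graph = tf.Graph()
with val_graph.as_default():
    val_net = unet.Unet(channels=1, n_class=2, keep_prob=1.0)
    val_saver = tf.train.Saver()

with tf.Session(graph=train_graph) as sess:
    sess.run(init_op)
    # ... run a training epoch ...
    train_saver.save(sess, "./model.ckpt")

with tf.Session(graph=val_graph) as sess:
    # Dropout has no variables, so the checkpoint matches the
    # validation graph's variable set and restores cleanly.
    val_saver.restore(sess, "./model.ckpt")
    # ... run validation ...
```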