simonmeister / UnFlow

UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss

occ loss implementation problem #8

Closed: immars closed this issue 6 years ago

immars commented 6 years ago

Hi,

in losses.py, the forward-backward occlusion loss is implemented as:

...
    flow_diff_bw = flow_bw + flow_fw_warped
...
    fb_occ_bw = tf.cast(length_sq(flow_diff_bw) > occ_thresh, tf.float32)
...
        mask_bw *= (1 - fb_occ_bw)
...
    occ_bw = 1 - mask_bw
...
    losses['occ'] = (charbonnier_loss(occ_fw) +
                     charbonnier_loss(occ_bw))
...

However, gradients cannot backpropagate through the tf.greater operation:

    test_g = tf.gradients(occ_bw, flow_diff_bw)
    print("occ gradient: %s" % test_g)

would output [None].
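
For reference, a minimal self-contained sketch reproducing the effect (TF1 graph mode; the names here are illustrative, not from losses.py):

    import tensorflow as tf

    flow_diff = tf.placeholder(tf.float32, shape=[None, 2])
    length_sq = tf.reduce_sum(tf.square(flow_diff), axis=-1)

    # Thresholding produces a piecewise-constant tensor, so TF registers
    # no gradient path from the mask back to flow_diff.
    occ = tf.cast(length_sq > 0.5, tf.float32)

    print(tf.gradients(occ, flow_diff))  # -> [None]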

Does that mean the occ loss is not working?

Correct me if anything is wrong.

Thanks

immars commented 6 years ago

This seems to work:

    fw_occ_bound_margin = length_sq(flow_diff_fw) - occ_thresh
    bw_occ_bound_margin = length_sq(flow_diff_bw) - occ_thresh
    fw_occ_bound_margin = (tf.sign(fw_occ_bound_margin) + 1.0) / 2 * fw_occ_bound_margin
    bw_occ_bound_margin = (tf.sign(bw_occ_bound_margin) + 1.0) / 2 * bw_occ_bound_margin
    losses['occ'] = (charbonnier_loss(fw_occ_bound_margin, border_fw * fb_occ_fw) +
                     charbonnier_loss(bw_occ_bound_margin, border_bw * fb_occ_bw))

where border_fw and border_bw are border masks.
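
Incidentally, (tf.sign(m) + 1.0) / 2 * m equals tf.nn.relu(m) for all real m (both are zero at m = 0), so the margin could be written more concisely, e.g.:

    # equivalent hinge on the occlusion margin, written with relu
    fw_occ_bound_margin = tf.nn.relu(length_sq(flow_diff_fw) - occ_thresh)
    bw_occ_bound_margin = tf.nn.relu(length_sq(flow_diff_bw) - occ_thresh)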

simonmeister commented 6 years ago

Hi!

Thanks for noticing this. I will have to look into it in more detail in the next few days. Did you check whether your correction changes the training outcome?

immars commented 6 years ago

I haven't had the time to fully run the training yet. The value of the occ loss is much smaller this way; maybe some weight tuning is required to get good results.

immars commented 6 years ago

By the way, @simonmeister, how many minibatches are required to achieve the AEE (All) of 3.78 in Table 3 for UnFlow-C?
So far I've tried (with the original code) about 400k minibatches of size 4 on a single GPU, and only got an AEE/occluded of 6.x in TensorBoard.

simonmeister commented 6 years ago

On which dataset did you train? Did you pre-train on synthia? If you pre-trained first, 400k iterations on KITTI should get you close to our result. We used the same config settings as in the train_* sections of config.ini.

immars commented 6 years ago

No pre-training, I believe; just dataset = kitti in config.ini, trained from scratch.

So to reproduce your result, I should first pre-train on synthia with a supervised method, then train on kitti_raw with the unsupervised loss?

simonmeister commented 6 years ago

The pre-training on synthia is also unsupervised. Yes, you can just use the default config with dataset = synthia first and then use dataset = kitti and finetune = NAME-OF-SYNTHIA-EXPERIMENT.
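
For example, a sketch of the two stages in config.ini (the experiment name below is hypothetical):

    ; stage 1: unsupervised pre-training
    dataset = synthia

    ; stage 2: unsupervised fine-tuning on KITTI,
    ; loading the weights of the stage-1 experiment
    dataset = kitti
    finetune = my-synthia-experiment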

immars commented 6 years ago

Great, thanks!

yzcjtr commented 6 years ago

Hi @simonmeister, I have a question about the constant penalty in Eq. (2) of your paper. As the binary occlusion mask is fully determined by the condition in Eq. (1), such a constant penalty provides no gradient during backpropagation when you optimize the weights with SGD. Correct me if my understanding is wrong, thanks!

simonmeister commented 6 years ago

Hi @yzcjtr, you are correct: the occlusion flag is constant w.r.t. backpropagation, so the penalty should have no effect on the gradients. @immars, your snippet does not seem to have an effect on the training outcomes in what I have tried so far.

immars commented 6 years ago

OK, thanks! It seems this regularization term is either not crucial, or a better form of it should be derived.

simonmeister commented 6 years ago

I am closing this as it seems to be redundant with the discussion in https://github.com/simonmeister/UnFlow/issues/10.