This seems to work:

fw_occ_bound_margin = length_sq(flow_diff_fw) - occ_thresh
bw_occ_bound_margin = length_sq(flow_diff_bw) - occ_thresh
# keep only positive margins: (sign(x) + 1) / 2 * x equals relu(x)
fw_occ_bound_margin = (tf.sign(fw_occ_bound_margin) + 1.0) / 2 * fw_occ_bound_margin
bw_occ_bound_margin = (tf.sign(bw_occ_bound_margin) + 1.0) / 2 * bw_occ_bound_margin
losses['occ'] = (charbonnier_loss(fw_occ_bound_margin, border_fw * fb_occ_fw) +
                 charbonnier_loss(bw_occ_bound_margin, border_bw * fb_occ_bw))
where border_fw and border_bw are the border masks.
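As a side note (not part of the snippet above): the (tf.sign(x) + 1.0) / 2 * x trick is just a ReLU, so an equivalent and perhaps more readable form, reusing the same helpers from losses.py, would be:

```python
# Equivalent margin computation via relu; same values and, except at exactly
# zero, the same gradients as the sign-based version above.
fw_occ_bound_margin = tf.nn.relu(length_sq(flow_diff_fw) - occ_thresh)
bw_occ_bound_margin = tf.nn.relu(length_sq(flow_diff_bw) - occ_thresh)

losses['occ'] = (charbonnier_loss(fw_occ_bound_margin, border_fw * fb_occ_fw) +
                 charbonnier_loss(bw_occ_bound_margin, border_bw * fb_occ_bw))
```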
Hi!
Thanks for noticing this. I will have to look into it in more detail over the next few days. Did you check whether your correction changes the training outcomes?
I haven't had the time to fully run the training yet. The value of the occ loss is much smaller this way; maybe some weight tuning is required to get good results.
By the way, @simonmeister, how many minibatches are required to achieve AEE(All) 3.78 in Table 3 for UnFlow-C?
So far I've tried (with the original code) about 400k minibatches of size 4 on a single GPU, and only got an AEE/occluded of 6.x in TensorBoard.
On which dataset did you train? Did you pre-train on synthia? If you pre-trained first, 400K iterations on KITTI should get you close to our result. We used the same config settings as in the config.ini train_* sections.
No pre-training as far as I know, just dataset = kitti in config.ini, trained from scratch.
So to reproduce your result, I should first pre-train on synthia with a supervised method, then train on kitti_raw with the unsupervised loss?
The pre-training on synthia is also unsupervised. Yes, you can just use the default config with dataset = synthia first and then use dataset = kitti and finetune = NAME-OF-SYNTHIA-EXPERIMENT.
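For reference, the two-stage schedule could look roughly like this in config.ini (only the dataset and finetune keys are taken from the thread above; treat everything else as a placeholder and check the train_* sections of the shipped config.ini):

```ini
; First run: unsupervised pre-training
dataset = synthia

; Second run, after the first has finished: unsupervised training on KITTI,
; initialized from the SYNTHIA experiment
dataset = kitti
finetune = NAME-OF-SYNTHIA-EXPERIMENT
```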
Great, thanks!
Hi @simonmeister, I have a question about the constant penalty in Eq.(2) of your paper. Since the binary occlusion mask is fully determined by the condition in Eq.(1), such a constant penalty gives no help to backward propagation when you optimize the weights with SGD. Correct me if my understanding is wrong, thanks!
Hi! @yzcjtr, you are correct: the occlusion flag is constant w.r.t. backpropagation, and the penalty should have no effect on backprop. @immars, your snippet does not seem to have an effect on the training outcomes in what I have tried so far.
OK, thanks! It seems this regularization term may not be crucial, or a better form should be derived.
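To make the point above concrete, the occlusion penalty has roughly this structure (paraphrased, not the paper's exact notation):

```latex
% Occlusion indicator from the forward-backward consistency check (Eq. 1, paraphrased):
o_x =
\begin{cases}
  1 & \text{if } \lVert w^f(x) + w^b(x + w^f(x)) \rVert^2
      > \alpha_1 \big( \lVert w^f(x) \rVert^2 + \lVert w^b(x + w^f(x)) \rVert^2 \big) + \alpha_2 \\
  0 & \text{otherwise}
\end{cases}

% Constant penalty on pixels flagged as occluded (the Eq. 2 term under discussion):
E_{occ} = \sum_x o_x \, \lambda_p

% o_x is piecewise constant in the flow, so the term's gradient vanishes
% almost everywhere and it contributes nothing to the SGD updates:
\frac{\partial E_{occ}}{\partial w^f} = \lambda_p \sum_x \frac{\partial o_x}{\partial w^f} = 0
```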
I am closing this as it seems to be redundant with the discussion in https://github.com/simonmeister/UnFlow/issues/10.
Hi,
in losses.py, the forward-backward occlusion loss is implemented with a hard tf.greater comparison on the forward-backward flow difference. However, gradients cannot backpropagate through the tf.greater operation: taking the gradient of the resulting occlusion term with respect to the flow would output [None]. Does that mean the occ loss is not working?
Correct me if anything is wrong,
Thanks
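For anyone landing here later, here is a minimal standalone check of that claim. It is a sketch, not the code from losses.py (the shapes, threshold, and penalty value are made up), written against TensorFlow 1.x graph mode:

```python
import tensorflow as tf  # TensorFlow 1.x

# Hypothetical stand-in for the forward-backward flow difference.
flow_diff = tf.placeholder(tf.float32, [None, 64, 64, 2])
length_sq = tf.reduce_sum(tf.square(flow_diff), axis=3, keepdims=True)
occ_thresh = 0.01  # arbitrary threshold for this sketch

# Hard threshold as in the occlusion check: a boolean mask cast to float.
occ_mask = tf.cast(tf.greater(length_sq, occ_thresh), tf.float32)

# Constant penalty applied to occluded pixels (arbitrary value here).
occ_loss = tf.reduce_mean(occ_mask * 12.4)

# No gradient path exists through tf.greater / the bool cast, so the
# gradient of this term w.r.t. the flow difference is None.
print(tf.gradients(occ_loss, [flow_diff]))  # -> [None]
```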