Closed · XiaoyuShi97 closed this issue 3 years ago
Thank you for your kind words! It has been a while since I read the DAIN paper, but if I remember correctly, it performs linear splatting of the optical flow during the depth-aware flow projection, where depth serves as a weight (Equations 1 and 2 in their paper). The images/contexts are then backward-warped using the forward-warped optical flow (after outside-in hole filling). This linear splatting of optical flow with subsequent backward warping of the colors/features is differentiable, but I am not sure whether they implemented it as such. In other words, their approach should make it possible to also supervise/fine-tune the flow and depth estimators, but I do not remember enough details from the DAIN paper to say whether they did. With our proposed softmax splatting, one could improve upon the linear splatting in the DAIN paper and, since we provide a differentiable softmax splatting implementation, also supervise/fine-tune the flow and depth estimators. Hope this answers your questions, and feel free to reach out again if I missed something or if you have additional questions!
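For readers unfamiliar with the operation being discussed, here is a minimal NumPy sketch of softmax splatting with a nearest-neighbor kernel. This is an illustrative toy, not the repository's API: the actual implementation uses a bilinear kernel in CUDA, and all names below (`softmax_splat`, `importance`) are made up for this example. The key idea from the paper is that pixels colliding at the same target location are blended with a softmax over an importance metric (e.g. inverse depth), rather than the plain weighted average used in DAIN's linear splatting.

```python
import numpy as np

def softmax_splat(src, flow, importance):
    """Forward-warp `src` along `flow`, resolving collisions with a softmax
    over `importance` (nearest-neighbor toy version, not a bilinear kernel).

    src:        (H, W) pixel values
    flow:       (H, W, 2) per-pixel displacement as (dx, dy)
    importance: (H, W) metric Z; higher values dominate at collisions
    """
    H, W = src.shape
    num = np.zeros((H, W))  # accumulates exp(Z) * value at each target pixel
    den = np.zeros((H, W))  # accumulates exp(Z) at each target pixel
    for y in range(H):
        for x in range(W):
            tx = int(round(x + flow[y, x, 0]))
            ty = int(round(y + flow[y, x, 1]))
            if 0 <= tx < W and 0 <= ty < H:  # drop out-of-bounds splats
                w = np.exp(importance[y, x])
                num[ty, tx] += w * src[y, x]
                den[ty, tx] += w
    out = np.zeros((H, W))
    mask = den > 0          # pixels nothing splatted to remain holes (zero)
    out[mask] = num[mask] / den[mask]
    return out
```

For example, if two source pixels with values 1.0 and 3.0 land on the same target pixel and the second has a much higher importance (say, because it is closer to the camera), the output at that pixel is close to 3.0; with a linear weighted average, it would instead be pulled toward the mean of the two.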
Hi,
It is the best paper I have read recently. There is one point I am not sure about when comparing your work with DAIN. Consider Figure 3 of DAIN: my understanding is that the red-box part is not differentiable. In other words, it just takes the output of off-the-shelf optical flow and depth estimators and uses it to approximate a flow map for backward warping. In your case, although you also use an off-the-shelf optical flow estimator, the gradients can still flow back to the feature extractors, so the features are fine-tuned to synthesize better content. That is the main benefit of a differentiable forward warp. Am I right?
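The gradient flow being asked about can be illustrated on a toy collision: when several source values land on one target pixel, the softmax-splatted output o = Σ exp(z_i) v_i / Σ exp(z_i) has the well-defined gradient ∂o/∂z_j = exp(z_j)(v_j − o) / Σ exp(z_i), so a loss on the synthesized frame can reach the importance metric (and hence a depth or feature network). This is a sketch under that single-pixel simplification, checked by finite differences; it is not the paper's CUDA kernel.

```python
import numpy as np

# Two source values v colliding at one target pixel, with importance metric z
# (e.g. inverse depth). The splatted output is a softmax-weighted average.
v = np.array([1.0, 3.0])
z = np.array([0.0, 10.0])

def splat(z):
    w = np.exp(z)
    return np.sum(w * v) / np.sum(w)

# Analytic gradient of the output w.r.t. the importance metric:
#   d o / d z_j = w_j * (v_j - o) / sum(w)
o = splat(z)
w = np.exp(z)
grad = w * (v - o) / np.sum(w)

# Finite-difference check: the operation is smooth in z, so gradients can
# propagate from a synthesis loss back into the network that predicts z.
eps = 1e-6
fd = np.array([(splat(z + eps * np.eye(2)[j]) - o) / eps for j in range(2)])
```

Since the numerator is also linear in each v_i, the same argument gives gradients with respect to the warped features themselves, which is what allows the feature extractor to be fine-tuned end to end.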