Closed Hongdou68 closed 4 years ago
Because there is no optical flow network to estimate W_refn.
Hi, thanks for your interest! The comparison is done only for the refinement module, and the RGB input is removed. This setting is described in detail in the paper.
Regards,
Syl.
Hi, thanks for your reply. From your paper I know the RGB image is removed, but I do not understand how to get the warped ToF depth map and warped ToF amplitude, which are the inputs of the ToF-KPN. Or do you only use the original ToF amplitude and the de-aliased ToF depth map for the ToF-KPN?
Do you mean warping for alignment? In that comparison there is no RGB involved, so no alignment is needed. The ground truth depth is originally aligned with the ToF depth.
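For intuition only (this is not the repository's code): warping for alignment typically means forward-projecting the ToF depth map into the RGB camera's frame using the calibrated intrinsics and extrinsics. The function name and interface below are hypothetical, a minimal sketch of that standard reprojection, assuming pinhole cameras and a known rigid transform (R, t) from the ToF frame to the RGB frame.

```python
import numpy as np

def warp_depth_to_rgb(depth_tof, K_tof, K_rgb, R, t):
    """Forward-warp a ToF depth map into the RGB camera frame.
    A z-buffer keeps the nearest depth when several ToF pixels
    project onto the same RGB pixel. Holes are left as 0."""
    h, w = depth_tof.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates, 3 x N
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    rays = np.linalg.inv(K_tof) @ pix          # back-project to rays
    pts = rays * depth_tof.ravel()             # 3D points in the ToF frame
    pts_rgb = R @ pts + t[:, None]             # transform into the RGB frame
    proj = K_rgb @ pts_rgb                     # project into RGB pixels
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    z = pts_rgb[2]
    out = np.full((h, w), np.inf)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
    for ui, vi, zi in zip(u[valid], v[valid], z[valid]):
        out[vi, ui] = min(out[vi, ui], zi)     # z-buffer: keep nearest
    out[np.isinf(out)] = 0.0
    return out
```

The point of the reply above is that this step is unnecessary in the RGB-free comparison: when the ground-truth depth is already aligned with the ToF depth, there is no second camera frame to warp into.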
Thanks, I understand it now. I have another question: from your paper I know how you generate the training data from the synthetic scenes, but I have not found any program to do it, and I also do not know how to generate the synthetic scenes. Do you have any code for this?
Our data generation is based on the code released with Su et al.'s Deep End-to-End Time-of-Flight Imaging. You can find the code together with some scenes there.
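As background for why de-aliasing matters in such synthetic data (this snippet is illustrative, not part of the referenced release): a continuous-wave ToF camera measures depth through phase, so any true depth beyond the unambiguous range c / (2f) wraps back around. A minimal sketch, assuming a single modulation frequency:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def wrap_depth(depth_m, mod_freq_hz):
    """Simulate ToF phase wrapping: depths beyond the unambiguous
    range c / (2 f) alias back into [0, range)."""
    unambiguous_range = C / (2.0 * mod_freq_hz)
    return np.mod(depth_m, unambiguous_range)

# At 40 MHz modulation the unambiguous range is about 3.75 m,
# so a surface at 5 m aliases to roughly 1.25 m.
depth = np.array([1.0, 3.0, 5.0])
wrapped = wrap_depth(depth, 40e6)
```

Given ground-truth depth from a rendered scene, applying this wrapping (plus the sensor noise model) yields the aliased ToF depth, and the de-aliased depth is what the network's input pipeline recovers.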
Okay, thank you very much for your help!
Hi, sylqiu: I have read your paper. A comparison with DeepToF is given, but the inputs of the ToF-KPN are the RGB image, the warped ToF amplitude, and the warped ToF depth image. When you train a ToF-KPN without the RGB image, how do you get the warped ToF amplitude and warped ToF depth image?