CHNxindong opened this issue 5 months ago
This is also what I want to ask. I hope someone can answer this.
Hi, thank you for your interest in our work. During training, we used the ground-truth intrinsics and depth to compute the ARAP loss (Eq. 4 in our paper), which does not appear in our inference code. Hope this helps.
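For readers wondering what such a term can look like in practice, below is a minimal sketch of a generic as-rigid-as-possible rigidity loss: tracked points are unprojected into 3D with the ground-truth depth and intrinsics, and changes in their pairwise 3D distances between frames are penalized. This is only an illustration under my own assumptions; the function names, the pairwise formulation, and the weighting are hypothetical and may differ from Eq. 4 in the paper.

```python
import torch

def unproject(pts_2d, depth, K):
    """Unproject pixel coordinates to 3D camera coordinates with the pinhole model.

    pts_2d: (N, 2) pixel coordinates (u, v)
    depth:  (H, W) ground-truth depth map
    K:      (3, 3) camera intrinsic matrix
    """
    u, v = pts_2d[:, 0], pts_2d[:, 1]
    z = depth[v.long(), u.long()]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return torch.stack([x, y, z], dim=-1)  # (N, 3)

def arap_loss(pts_t0, pts_t1, depth_t0, depth_t1, K):
    """Penalize changes in pairwise 3D distances between corresponding tracked points."""
    p0 = unproject(pts_t0, depth_t0, K)
    p1 = unproject(pts_t1, depth_t1, K)
    d0 = torch.cdist(p0, p0)  # (N, N) pairwise distances in frame t
    d1 = torch.cdist(p1, p1)  # (N, N) pairwise distances in frame t+1
    return (d0 - d1).abs().mean()
```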
Thanks for your reply! I wonder if this part of the code (using the GT depth maps and camera intrinsics to unproject pixels into 3D space) will be open-sourced.
The pinhole camera equation is used to unproject a depth image into 3D given the camera intrinsics. You can use the Open3D API for this.
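As a concrete example, here is a minimal sketch using Open3D's `PointCloud.create_from_depth_image` to unproject a depth map; the image size and intrinsic values below are placeholders you would replace with your own ground-truth camera parameters.

```python
import numpy as np
import open3d as o3d

# Placeholder camera parameters -- replace with your own ground-truth intrinsics.
width, height = 640, 480
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
intrinsic = o3d.camera.PinholeCameraIntrinsic(width, height, fx, fy, cx, cy)

# Depth in meters as float32; use a uint16 millimeter image with depth_scale=1000.0 if that is your format.
depth_np = np.random.uniform(0.5, 5.0, (height, width)).astype(np.float32)
depth_img = o3d.geometry.Image(depth_np)

# Unproject every pixel with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
pcd = o3d.geometry.PointCloud.create_from_depth_image(
    depth_img, intrinsic, depth_scale=1.0, depth_trunc=10.0
)
points = np.asarray(pcd.points)  # (N, 3) points in camera coordinates
```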
Hello, thank you for this interesting project.
I would like to ask: what is the computational cost per image pair? How does it depend on the number of tracked points? I could not find this information in the paper itself.
Also, what do you think about its effectiveness in extreme cases, such as tracking crossing pedestrians at a densely crowded intersection?
Nice work! When I read the paper, I couldn't find a counterpart in the code for the part marked in yellow below. Could you give me some suggestions? Looking forward to your reply!