Haawron closed this issue 3 years ago
Sorry, I fed in wrong plane information, which had estimated heights over 2.5 meters. I manually edited those files and got beautiful results.
And to answer my own question: the translation parts don't affect the result much. Closing this issue.
@Haawron do you mind sharing your steps to get a good result? I've set my projection matrices and kept my rotation and translation matrices as identity matrices.
I'm using your repo in my mono 3D detection project, in which I need to feed in my own dataset.
In this case, I think that when we estimate the point cloud from the depth map of the image, both data share the same origin (there is no displacement between velodyne and camera). So I think we need to set the translation parts of P2, R0_rect, and Tr_velo_to_cam to the zero vector.
E.g., for P2, change

    [.99 -.01 0 44.82]
    [.01  .99 0   .22]
    [  0    0 1     0]

to

    [.99 -.01 0 0]
    [.01  .99 0 0]
    [  0    0 1 0]

(i.e., zero the last column).
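In code, the zeroing described above is just clearing the fourth column of each 3x4 calibration matrix. A minimal sketch with numpy, using the P2 values from this comment as hypothetical input (the same operation would apply to Tr_velo_to_cam; R0_rect is 3x3 and has no translation column):

```python
import numpy as np

# Hypothetical KITTI-style P2 (3x4 camera projection matrix).
# Column 3 holds the translation term.
P2 = np.array([[0.99, -0.01, 0.0, 44.82],
               [0.01,  0.99, 0.0,  0.22],
               [0.00,  0.00, 1.0,  0.00]])

P2_zeroed = P2.copy()
P2_zeroed[:, 3] = 0.0  # drop the translation column

# Tr_velo_to_cam is also 3x4, so its translation is cleared the same way.
# R0_rect is a 3x3 rotation and is left unchanged.
```
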
But I got poor results in both cases!
So my questions are:
- How did you handle those? Modifying the calibration matrices is not mentioned in the paper.
- Have you fed in a custom dataset yourself? What did I miss? Or is it common for 3D detection to perform this poorly on custom data?
Where can we define P2, R0_rect, and Tr_velo_to_cam? I took a picture with my phone and used the Depth Anything model to get a depth map; now I need to convert the depth map into LiDAR-format data. How can we do that?
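Not the repo's code, but a minimal sketch of the usual recipe: back-project each pixel through the pinhole model using your camera's intrinsics (fx, fy, cx, cy below are placeholders you would take from your phone's intrinsic matrix K), then dump the points in the float32 (x, y, z, intensity) layout KITTI velodyne `.bin` files use. Note the points come out in the camera frame; this only matches KITTI conventions if your calib file uses identity extrinsics.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an (h, w) metric depth map into an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def save_kitti_bin(points, path):
    """Write points as float32 (x, y, z, intensity) records, KITTI-style."""
    intensity = np.ones((points.shape[0], 1))  # no real intensity available
    np.hstack([points, intensity]).astype(np.float32).tofile(path)
```

One caveat: Depth Anything outputs relative (affine-invariant) depth by default, so the map must be converted to metric depth before this back-projection is geometrically meaningful.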
@Genozen Were you able to get good results? I already have depth maps and am trying to convert them to point clouds, but I'm not getting good results. What should the calib file look like if you are using a DL model to get depth images, given that you have the intrinsic matrix of the camera?
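For the calib-file question, one common workaround (an assumption on my part, not something from this repo) is to write a KITTI-style `calib.txt` where the projection matrices are built from your known intrinsics and the extrinsics are identity, since a DL-predicted depth map has no real LiDAR-to-camera offset. `fx`, `fy`, `cx`, `cy` are placeholders for your camera's intrinsic matrix:

```python
import numpy as np

def write_kitti_calib(path, fx, fy, cx, cy):
    """Write a minimal KITTI-format calib file with identity extrinsics."""
    P = np.array([[fx, 0.0, cx, 0.0],
                  [0.0, fy, cy, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    R0 = np.eye(3)                               # no rectification
    Tr = np.hstack([np.eye(3), np.zeros((3, 1))])  # velo frame == cam frame
    with open(path, "w") as f:
        for name in ("P0", "P1", "P2", "P3"):    # KITTI lists four cameras
            f.write(name + ": " + " ".join(f"{v:.6e}" for v in P.ravel()) + "\n")
        f.write("R0_rect: " + " ".join(f"{v:.6e}" for v in R0.ravel()) + "\n")
        f.write("Tr_velo_to_cam: " + " ".join(f"{v:.6e}" for v in Tr.ravel()) + "\n")
```

Whether a detector trained on KITTI then generalizes to your images is a separate problem; the calib file only makes the geometry consistent.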