kevinchiu19 opened 1 month ago
Hello,
Thank you for your interest in our work!
I don't think I have tried to project the lidar points into 3D, only into 2D. There should be good resources in the official Waymo GitHub repo that can help you!
By the way, the monocular depth predictions from our work are disparity values, which are scale-ambiguous. You may need additional signals to recover metric depth.
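For example, one common heuristic is per-image median scaling against sparse lidar depth. A minimal sketch (the function and variable names below are illustrative, not part of this repo):

```python
import numpy as np

def median_scale(pred_disp, gt_depth, mask):
    # Convert disparity to relative depth, guarding against division by zero.
    pred_depth = 1.0 / np.clip(pred_disp, 1e-6, None)
    # Align the scale-ambiguous prediction to metric GT via per-image medians.
    scale = np.median(gt_depth[mask]) / np.median(pred_depth[mask])
    return pred_depth * scale
```

Note this only aligns the scale per image; it does not make the model metric on its own.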
Hope this helps!
Thank you for your quick reply!
What I'm more confused about is how the uv and depth values are read in this code:
https://github.com/YihongSun/Dynamo-Depth/blob/ec886d5c2cf10daa3d7bcdfd3e842c4d80373802/prepare_data/waymo.py#L211
I have seen many other pipelines simply take `points` as a point cloud in the ego coordinate system. Here, however, `cp_points` seems to be used as the uv coordinates, and the depth value is "the distance between lidar points and vehicle frame origin":
https://github.com/YihongSun/Dynamo-Depth/blob/ec886d5c2cf10daa3d7bcdfd3e842c4d80373802/prepare_data/waymo.py#L177
I don't quite understand how a depth defined this way can be unprojected into 3D space.
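To make my confusion concrete, here is a minimal sketch of the two unprojection conventions I can think of, assuming a plain pinhole model with intrinsics `K` (the function names are mine, not from the repo):

```python
import numpy as np

def unproject_z_depth(uv, z, K):
    # Standard pinhole unprojection: z is the camera-frame Z coordinate.
    rays = np.linalg.inv(K) @ np.vstack([uv.T, np.ones(len(uv))])  # 3 x N, z = 1
    return (rays * z).T  # scale each ray by its z-depth -> N x 3 points

def unproject_range(uv, r, K):
    # If r is a Euclidean distance from the camera origin (a range value),
    # each ray must be normalized to unit length before scaling by r.
    rays = np.linalg.inv(K) @ np.vstack([uv.T, np.ones(len(uv))])
    rays /= np.linalg.norm(rays, axis=0, keepdims=True)
    return (rays * r).T
```

Since the stored depth here is the distance to the vehicle frame origin rather than to the camera, neither of these alone seems to recover the original point, which is what confuses me.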
I also used your code (https://github.com/YihongSun/Dynamo-Depth/blob/ec886d5c2cf10daa3d7bcdfd3e842c4d80373802/tools.py#L193) to project the GT depth into the camera coordinate system, but the result seems to differ from what I get by converting the lidar points to depth and then projecting that into the camera coordinate system.
Ok, I think I understand the question now. I will dig into this for a bit, and get back to you!
Thank you for taking the time to help solve the problem!
The following code is the lidar->depth conversion I tried; I have also tested the depth->lidar direction and it is correct.
https://github.com/nnanhuang/S3Gaussian/blob/6c96925981a7a02328f382691d38a20bfe8c05a2/scene/dataset_readers.py#L829
https://github.com/nnanhuang/S3Gaussian/blob/6c96925981a7a02328f382691d38a20bfe8c05a2/scene/dataset_readers.py#L886
However, the depth obtained by this code differs from the depth mentioned above.
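In case it is useful, this is the kind of round-trip check I ran (a sketch; `lidar_to_depth` and `depth_to_lidar` stand in for the two linked functions, and their exact signatures are my assumption):

```python
import numpy as np

def roundtrip_check(points_vehicle, lidar_to_depth, depth_to_lidar, tol=0.05):
    # Project vehicle-frame lidar points to (depth, uv), unproject them back,
    # and report the per-point error; a large mismatch suggests the two
    # conversions assume different depth definitions (z-depth vs. range).
    depth, uv = lidar_to_depth(points_vehicle)
    recovered = depth_to_lidar(depth, uv)
    err = np.linalg.norm(points_vehicle - recovered, axis=1)
    print(f"mean err: {err.mean():.3f} m, max err: {err.max():.3f} m")
    return err.mean() < tol
```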
Thanks!
Hi,
Sorry for the delayed response. From a first look, it does appear that the current implementation may be flawed; thank you for taking the time to bring this up!
I'm a bit tied up with deadlines at the moment, but I'll take care of this as soon as they’re out of the way.
First of all, thank you for your great work!
I want to use your work to get metric depth on the Waymo dataset, i.e., to recover the correct lidar point cloud. But after reading the GT-depth code, I cannot project it into the correct point cloud. How should I project the GT depth in your code into a lidar point cloud?
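For reference, this is roughly the projection I am attempting (a sketch; `K` and `T_cam_to_vehicle` are the camera intrinsic matrix and camera-to-vehicle extrinsic from the Waymo calibration, and the function name is mine):

```python
import numpy as np

def depth_to_vehicle_points(depth, K, T_cam_to_vehicle):
    # Unproject a dense depth map (assumed to hold camera z-depth) to
    # camera-frame XYZ, then transform the points into the vehicle frame.
    v, u = np.nonzero(depth > 0)                     # pixels with valid depth
    uv1 = np.stack([u, v, np.ones_like(u)], axis=0)  # 3 x N homogeneous pixels
    pts_cam = np.linalg.inv(K) @ uv1 * depth[v, u]   # scale rays by z-depth
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    return (T_cam_to_vehicle @ pts_h)[:3].T          # N x 3 vehicle-frame points
```

One thing I am unsure about: Waymo's camera frame is x-forward/z-up rather than the usual z-forward pinhole convention, so an axis permutation may be needed before applying the extrinsic; I am not sure whether I am handling that correctly.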
The picture below shows the point cloud I projected; there appears to be a small offset or scale difference.
Looking forward to your reply!