sarimmehdi opened this issue 4 years ago
I have not yet read Yao-Shao's code.
You can also try our LiDAR->Camera projection lib. https://github.com/waymo-research/waymo-open-dataset/blob/master/third_party/camera/ops/camera_model_ops_test.py
You can find example of camera projection in this lib as well.
Hopefully this helps.
Hi. Thank you for your reply. I looked at your code, but I cannot see how you construct the intrinsic and extrinsic parameter matrices. In your code, you take the extrinsic and intrinsic parameters as list inputs:
image_points_t = py_camera_model_ops.world_to_image(
    extrinsic, intrinsic, metadata, camera_image_metadata, global_points)
I followed your code down to its C++ implementation (camera_model_ops.cc), but even there I can't see where you create the matrices. Can you please help me understand how you do it? A converter to the KITTI format would be really useful, as many codebases for 3D object detection and depth estimation use it.
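For what it's worth, the matrices don't need to be "created" inside the op: the flat lists passed to world_to_image can be reshaped directly. A minimal sketch of that reshaping, assuming the layout documented in the Waymo CameraCalibration proto (intrinsic = [f_u, f_v, c_u, c_v, k1, k2, p1, p2, k3]; extrinsic.transform = a row-major 4x4 camera-to-vehicle pose) — the helper name is mine, not part of the dataset API:

```python
import numpy as np

def waymo_calibration_to_matrices(intrinsic, extrinsic_transform):
    """Reshape the flat calibration lists from a Waymo CameraCalibration
    proto into a 3x3 intrinsic matrix and a 4x4 camera-to-vehicle pose.
    Layout assumed from dataset.proto:
      intrinsic = [f_u, f_v, c_u, c_v, k1, k2, p1, p2, k3]
      extrinsic_transform = row-major 4x4 transform (16 floats)."""
    f_u, f_v, c_u, c_v = intrinsic[:4]
    # 3x3 pinhole intrinsic matrix; the distortion coefficients
    # (k1, k2, p1, p2, k3) are not part of the linear matrix.
    K = np.array([[f_u, 0.0, c_u],
                  [0.0, f_v, c_v],
                  [0.0, 0.0, 1.0]])
    # 4x4 camera-to-vehicle pose, row-major.
    T_cam_to_vehicle = np.array(extrinsic_transform, dtype=float).reshape(4, 4)
    return K, T_cam_to_vehicle
```

This is only a sketch of the bookkeeping; the actual projection op also applies distortion and rolling-shutter metadata, which a plain K @ T pipeline ignores.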
I agree with you that a KITTI converter would be useful. You may try the one from Yao-Shao for now. Hopefully he can help you resolve the issue in his code.
Our projection code is here: https://github.com/waymo-research/waymo-open-dataset/blob/master/third_party/camera/camera_model.h
Hi. I looked at your code, and you seem to store the intrinsic and extrinsic parameters in the same way as KITTI does. I did that, but it is still not correct: I am unable to get the 3D bounding boxes at the correct positions. Do you think one would need to apply some special manipulation to the matrices to make them compatible with the KITTI axes?
We are just following the standard definition of the camera intrinsics. I suggest you first use our projection code directly (this ensures you read all the information correctly) and then debug the conversion code.
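The "standard definition" referred to here is, as far as I can tell, the usual pinhole model: a point in the camera frame projects via [u, v, 1]^T ~ K [X, Y, Z]^T. A minimal sketch with illustrative values (distortion and rolling shutter omitted):

```python
import numpy as np

def project_pinhole(K, point_cam):
    """Project a 3D point (camera frame, z-forward convention) to pixel
    coordinates with the standard pinhole model; no lens distortion."""
    p = K @ np.asarray(point_cam, dtype=float)
    return p[:2] / p[2]  # perspective divide

# Illustrative intrinsics, not from any real calibration.
K = np.array([[720.0, 0.0, 640.0],
              [0.0, 720.0, 360.0],
              [0.0, 0.0, 1.0]])

# A point 10 m ahead, 1 m right, 0.5 m down of the camera.
print(project_pinhole(K, [1.0, 0.5, 10.0]))  # -> [712. 396.]
```

Note this assumes the z-forward camera convention; if the input point is expressed in a different axis convention, it must be rotated into this frame first, which is exactly where the KITTI conversion can go wrong.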
I posted an example code here. https://github.com/waymo-research/waymo-open-dataset/issues/24#issuecomment-535718632
You may check out my toolkit that adapts the differences between the two datasets, with visualization confirming that the tool works properly. It also provides a tool to convert KITTI-format prediction results back to Waymo-format.
Hello. I really like your dataset, and I was hoping you could provide a script to convert your data to the KITTI format. For the moment, I found this, which sort of helps: https://github.com/Yao-Shao/Waymo_Kitti_Adapter
But the issue is that, because you use different reference axes in both the global and the vehicle frame, the intrinsic matrix looks different. In KITTI, the intrinsic matrix has the standard pinhole layout.
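For reference, that standard layout (as it appears in KITTI's 3x4 projection matrices, e.g. P2) is the intrinsics on the left plus a translation column for the rectified stereo offset; the numbers below are illustrative only, not from any real KITTI calibration file:

```python
import numpy as np

# Illustrative focal lengths and principal point.
fx, fy, cx, cy = 721.5, 721.5, 609.6, 172.9

# KITTI-style 3x4 projection matrix: [K | t]. For the reference camera
# the translation column is zero; other cameras carry the stereo offset.
P = np.array([[fx, 0.0, cx, 0.0],
              [0.0, fy, cy, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
```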
But in the code I am using, the author apparently has to shift the columns, so the intrinsic matrix ends up looking quite different. Here is the piece of code that does this (from the adapter.py script in the GitHub repo above):
Here is an example output of the above code to illustrate the issue:
As you can see, the third column becomes the first, the first becomes the second, and the second becomes the third (with a minus sign added to the two columns that have a single entry). I used this approach, and apparently it doesn't even give the right 3D bounding boxes, as can be seen here: https://github.com/Yao-Shao/Waymo_Kitti_Adapter/issues/3
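One plausible explanation for that column shuffle, if I read the Waymo frame conventions correctly: the Waymo camera frame has +x pointing forward along the optical axis, +y left, and +z up, while the KITTI/standard camera frame has +x right, +y down, +z forward. Folding that axis swap into K reproduces exactly the permutation and sign pattern described above. A sketch with illustrative numbers:

```python
import numpy as np

# Hypothesized axis swap from the Waymo camera frame (x forward, y left,
# z up) to the KITTI camera frame (x right, y down, z forward):
#   x_kitti = -y_waymo,  y_kitti = -z_waymo,  z_kitti = x_waymo
A = np.array([[0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0],
              [1.0, 0.0, 0.0]])

# Illustrative intrinsics in the standard layout.
fx, fy, cx, cy = 2000.0, 2000.0, 950.0, 640.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# K @ A is the "shuffled" matrix: column 3 of K moves to column 1,
# column 1 (negated) to column 2, column 2 (negated) to column 3 --
# the same pattern produced by the adapter.py code.
K_shuffled = K @ A
print(K_shuffled)
```

If this is the cause, a cleaner fix than shuffling K is to keep K in the standard layout and instead rotate the extrinsic (apply A to the camera-to-vehicle pose), so that downstream KITTI tooling sees an unmodified intrinsic matrix.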
I was hoping you could shed some light on this. The camera calibration matrix needs to have the same format as in KITTI, but this approach apparently doesn't achieve that, and I cannot figure out why.