rttariverdi67 closed this issue 3 years ago.
Hello,
Could you post some examples of the incorrect output that you have? The code that projects from point cloud to image makes assumptions about the coordinate frame of the point clouds.
Edit: I meant standard format (x forward, y to the left and z pointing up)
@abhijeetshenoi Thank you for your reply. My result image at first was:
However, after following your comment, I got this for the upper camera:
and this for the lower camera:
the part that generates the image and point cloud projection is done like this:
import cv2
import numpy as np
import open3d as o3d
import torch

image = cv2.imread(img_path)
pcd = o3d.io.read_point_cloud(pcd_path)
pointcloud = np.asarray(pcd.points)
pnt = torch.tensor(pointcloud)
calib = OmniCalibration(calib_folder='calib')
velo2ref = calib.project_velo_to_ref(pnt)
ref2img = calib.project_ref_to_image_torch(velo2ref)
ref2img = ref2img.T[:, 1::10]  # keep one point in ten
result = print_projection_plt(points=ref2img, image=image)
but I am still not sure what is incorrect about the output I have.
In addition to this, you need to move the velodyne points to the camera coordinates (the velodyne sensors are displaced from the physical camera)
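Moving the velodyne points into the camera frame amounts to a rigid transform (a rotation plus a translation). A minimal sketch, with placeholder values; the real rotation and offset come from the calibration files, not from here:

```python
import numpy as np

# Minimal sketch of a rigid lidar-to-camera transform. R and t below are
# placeholders; the real values come from the calibration files.
def lidar_to_camera(points, R, t):
    # points: (N, 3) array in the lidar frame
    return points @ R.T + t

R = np.eye(3)                    # placeholder rotation (identity)
t = np.array([0.0, -0.1, 0.2])   # hypothetical sensor offset in metres
print(lidar_to_camera(np.zeros((1, 3)), R, t))  # [[ 0.  -0.1  0.2]]
```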
First, if the original point cloud is x forward, y to the left and z upward, we perform the following operations: https://github.com/StanfordVL/JRMOT_ROS/blob/9cbb1e2acf4dff8152f65dd32a1b6f5961fc5125/src/3d_detector.py#L78
At this point, z is facing forward, x is rightward, y is downward (the KITTI convention): https://github.com/StanfordVL/JRMOT_ROS/blob/9cbb1e2acf4dff8152f65dd32a1b6f5961fc5125/src/featurepointnet_model_util.py#L515 To debug, you might want to check the extent of the LiDAR points in all three dimensions. The dimension along which the range is within ±2 m is likely to be the vertical axis; maybe that will help you debug.
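That extent check can be sketched with NumPy. The synthetic cloud below is made up purely for illustration; with real data you would pass np.asarray(pcd.points):

```python
import numpy as np

# Synthetic cloud: ~45 m deep, ~25 m wide, ~4 m tall, so axis 2 is vertical.
rng = np.random.default_rng(0)
points = rng.uniform([-5.0, -12.0, -1.0], [40.0, 13.0, 3.0], size=(1000, 3))

extents = np.ptp(points, axis=0)         # per-axis range (max - min)
vertical_axis = int(np.argmin(extents))  # the few-metre axis is usually up/down
print(extents, vertical_axis)
```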
I checked the extent of the LiDAR points and here is what I can see.
I use o3d.io.read_point_cloud() to get the (x, y, z) of the given .pcd file; let's call it the original point cloud. The range of each dimension is 47.76, 26.18 and 7.09. I think 47.76 corresponds to x, 26.18 to y and 7.09 to z, so I believe it is in the standard format. With this assumption I process this (x, y, z) array (the original point cloud) as follows:
First: I give this array to move_lidar_to_camera_frame;
it changes the sign and order of the dimensions!
Second: the output of the first step goes to project_velo_to_ref;
it changes the sign and order of the dimensions again!
Third: the output of the second step goes to project_ref_to_image_torch,
which gives a 2D array like (v, u).
At the end, to match the dimensions of the RGB image with the result of the third step, I transpose, so I have (u, v).
Then I give the image and the (u, v) array to my print_projection_plt
function and get this result.
So in all these steps I don't change the dimensions myself! Where am I making a mistake?
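When copying the projected (u, v) points onto the RGB image, it usually helps to first drop points that fall outside the image bounds. A small sketch of such a filter (filter_in_image is a hypothetical helper, not part of the repo):

```python
import numpy as np

# Keep only projected points that land inside a width x height image,
# assuming uv is an (N, 2) array of (u, v) pixel coordinates.
def filter_in_image(uv, width, height):
    u, v = uv[:, 0], uv[:, 1]
    mask = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return uv[mask]

uv = np.array([[10.0, 20.0], [-5.0, 3.0], [640.0, 100.0]])
print(filter_in_image(uv, width=640, height=480))  # only [10., 20.] survives
```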
Could you visualise the point cloud in 3D at each intermediate step? There are plenty of utilities you can use that take .pcd files.
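For a quick look without a dedicated viewer, a headless matplotlib 3D scatter of the cloud at each step also works (interactively, Open3D's o3d.visualization.draw_geometries([pcd]) is an option). The random points below are placeholders for np.asarray(pcd.points):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Placeholder cloud; substitute np.asarray(pcd.points) at each step.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=1)
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
fig.savefig("cloud_step.png")
```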
Dear @abhijeetshenoi, here are visualizations from each step. I used helper points to show the coordinate axes better:
original ".pcd" file: the format here is x forward, y to the left and z pointing up
first: "move_lidar_to_camera_frame"; the format becomes x forward, y downward and z pointing left
second: "project_velo_to_ref"; the format stays the same: x forward, y downward and z pointing left
third: "project_ref_to_image_torch" gives me a 2D array (the projection), which I copy onto the RGB image using the "print_projection_plt" function I mentioned before.
However, by changing only the coordinates of the "move_lidar_to_camera_frame" result to (-y, -z, x), I got this output:
the result for the lower camera: and for the upper camera:
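The (-y, -z, x) remap above takes the standard frame (x forward, y to the left, z up) to the KITTI camera convention (x right, y down, z forward). A minimal NumPy sketch of that remap (standard_to_kitti is an illustrative name, not a repo function):

```python
import numpy as np

# Remap from the standard frame (x forward, y left, z up) to the KITTI
# camera convention (x right, y down, z forward): (x, y, z) -> (-y, -z, x).
def standard_to_kitti(points):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([-y, -z, x], axis=1)

pts = np.array([[1.0, 2.0, 3.0]])   # 1 m forward, 2 m left, 3 m up
print(standard_to_kitti(pts))       # [[-2. -3.  1.]]
```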
Many thanks for your support! Best, Rahim
Hi, thank you for this repository. I'm really new to this area and have some questions. I want to project the JRDB point clouds onto the RGB image and am having some problems. Given the stitched image and its point cloud, I want to have something like this; how can I do such a projection? I tried
project_ref_to_image_torch,
project_velo_to_ref and
move_lidar_to_camera_frame,
but the results are not correct. Thank you.