ouster-lidar / ouster_example

Ouster, Inc. sample code

Z value Negative #147

Closed ethan20120 closed 4 years ago

ethan20120 commented 4 years ago

Hi, can anyone please guide me here? I am getting negative Z values. My setup is a mobile lidar; we are mounting the lidar on top of a car. How can I address this?

[Screenshot from 2020-01-21 11-43-43]

Thanks

zuzi-m commented 4 years ago

What are you expecting in the Z values? The XYZ coordinates of the points in the point clouds you are getting from the lidar are in the Lidar Coordinate Frame described in the software user guide (section 4, Coordinate Frames, explains the coordinate frames in detail).

This means that negative values on the Z axis are no surprise, especially if you are mounting the lidar unit above the ground (e.g. on top of a car). Since the Z axis of the lidar coordinate frame points upwards and its origin is "defined by the position of the lidar lens aperture stop in the sensor" (as cited from the software user guide, page 23; in other words, the coordinate system originates in the middle of the sensor unit), everything below the origin will be assigned a negative Z coordinate value.
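The sign convention above can be sketched in a couple of lines of Python (a toy example, not Ouster code; the 1.65 m mounting height is an assumed value for illustration):

```python
# Toy sketch: why points below the sensor get negative Z in the Lidar frame.
mount_height = 1.65  # assumption: sensor origin is 1.65 m above the ground

# A return from the ground directly below the sensor, expressed in the
# Lidar frame (origin at the sensor, Z pointing up):
ground_point_z = 0.0 - mount_height

print(ground_point_z)  # negative, because the ground is below the sensor origin
```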

Does this explain the negative values sufficiently? When working with 3D data such as point clouds, it is important to understand the coordinate systems the data is in.

ethan20120 commented 4 years ago

Hi Zuzi,

Thanks for your explanation of the sensor's configuration.

Please correct me if I am wrong on this. According to the software manual, the data comes in an east-north-up coordinate frame, and in the Cartesian coordinate system the Z value is in meters (m). Now, if the height of my vehicle is 1.616 m and I add the Z translation of the sensor, which is 36.18 mm or 0.03618 m, then I get 1.65218 m altogether as the base height of my sensor. Now, if I want to find the height of the first point from the picture in my earlier comment, which is -7.624, then according to you the height of this point will be (1.65218 + 7.624) = 9.27618 m?

Can you please correct me if I am wrong here ? Any help will be very appreciated.

Thanks Ethan

ethan20120 commented 4 years ago

Hi @zuzi-m ,

Just to clear up the process I am applying to retrieve the X, Y, Z, noise, ring, reflectivity, and range attributes from the sensor: first I save my cloud in rosbag format. Then I run the ROS perception node "rosrun pcl_ros pointcloud_to_pcd input:=/os1_cloud_node/points" in one terminal while playing my rosbag in another terminal. That is how I get the attributes from the sensor, and later I use the PDAL platform to convert the PCD files into CSV or whatever file format I need.

Thanks

zuzi-m commented 4 years ago

Thanks for clearing up the process, although I am still not sure what coordinate frame your data is in, because I don't know whether any transformations are applied along the way (e.g. when converting the rosbag to a PCD file or the PCD file to CSV).

However, if you don't use any transformations on the way, then the values in your CSV are in the Lidar Coordinate Frame as it comes from the os1_cloud_node.

To transform the data (XYZ coordinates of points) from the Lidar coordinate frame to a different, new coordinate frame (let's call it the Vehicle coordinate frame), you have to do the following:

  1. define where the origin of Vehicle coordinate frame should be, and how its axes are oriented, in all three dimensions (XYZ) (should it be on the ground below the sensor? on the ground below the center of the vehicle? in the center of mass of the vehicle?)
  2. determine the position and orientation of the origin of the Vehicle coordinate frame with relation to Lidar coordinate frame
  3. create a transformation to move the origin of the Lidar coordinate frame to the origin of the Vehicle coordinate frame
  4. apply this transformation to data correctly
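Steps 3 and 4 above can be sketched with a homogeneous transformation matrix. This is an illustrative snippet, not any Ouster or ROS API; the helper names and the example point are made up:

```python
import numpy as np

def make_transform(translation, rotation=None):
    """Step 3: build a 4x4 homogeneous transform from an optional 3x3
    rotation and a 3-vector translation (illustrative helper)."""
    T = np.eye(4)
    if rotation is not None:
        T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def apply_transform(T, points):
    """Step 4: transform (N, 3) points from the source frame to the target frame."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    return (pts_h @ T.T)[:, :3]

# Example: same orientation, target origin 1.65218 m below the source origin
T_vehicle_from_lidar = make_transform([0.0, 0.0, 1.65218])
points_lidar = np.array([[1.0, 0.0, -2.0]])  # made-up point in the source frame
print(apply_transform(T_vehicle_from_lidar, points_lidar))
```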

In your case:

  1. I will assume that you want your Vehicle coordinate frame to have its origin on the ground, directly under the center of the lidar unit mounted on the vehicle, and that you want the frame to be oriented the same way as the Lidar coordinate frame
  2. Because the orientation is the same, and the origin of the Vehicle coordinate frame is directly under the origin of the Lidar coordinate frame, the position and orientation of the Vehicle coordinate frame with relation to the Lidar coordinate frame is zero in everything (X, Y, pitch, yaw, roll) except for the Z coordinate, which will be (if the lidar is mounted directly on top of the vehicle without the heat sink plate) -1.616 m (height of the vehicle) + (-0.03618 m) (distance between the base of the lidar unit and the origin of the Lidar coordinate frame) = -1.65218 m
  3. The transformation to move the origin of the Lidar coordinate frame to the origin of the Vehicle coordinate frame is the inverse of the position measured in step 2 (in our case there is only a translation along the Z axis, and no rotation). You can construct a 3D transformation matrix that translates by 1.65218 m along the Z axis.
  4. To transform the points from the Lidar coordinate frame into the Vehicle coordinate frame, multiply the points by the matrix created in step 3. For the first point it does the following to the Z coordinate: -7.624 + 1.65218 = -5.97182. This effectively means that the point you measured is 5.97182 m below the origin of the Vehicle coordinate frame (or 7.624 m below the origin of the Lidar coordinate frame), i.e. below the bottom of the vehicle, which would mean below the ground if the vehicle were perfectly level on perfectly flat ground. In the real world, though, the vehicle will not be perfectly level with relation to the ground and will move along an uneven surface, so it is not unexpected to see negative Z-axis values in the data.
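The arithmetic in step 4 can be checked numerically with the values quoted in this thread (vehicle height and Z offset are taken from the earlier comments, not measured here):

```python
# Step 4 arithmetic, using the figures quoted in this thread
vehicle_height = 1.616   # m, from the earlier comment
z_translation = 0.03618  # m, lidar frame origin above the unit's base (from the thread)
sensor_height = vehicle_height + z_translation  # 1.65218 m above the ground

z_lidar = -7.624  # Z of the first point, in the Lidar frame
z_vehicle = z_lidar + sensor_height
print(round(z_vehicle, 5))  # -5.97182, i.e. ~5.97 m below the Vehicle frame origin
```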

There are many resources about coordinate frames and transformations, especially related to ROS, that can help you understand how and why transformations are important and how they are used. I think it is crucial to understand this before working with sensor data like this and integrating it with other parts such as a vehicle.

Hope this response was not too long, and was clear enough to help you.

ethan20120 commented 4 years ago

Hi @zuzi-m

Thank you very much for taking your valuable time to explain this.

> However, if you don't use any transformations on the way, then the values in your CSV are in the Lidar Coordinate Frame as it comes from the os1_cloud_node.

To address this part: I would say my data comes in the os1_sensor coordinate frame, which is rotated 180 degrees from the os1_lidar coordinate frame. I think when I generate the PCD files using the "rosrun pcl_ros pointcloud_to_pcd input:=/os1_cloud_node/points" node, it converts the os1_lidar coordinate frame into the os1_sensor coordinate frame.


This comes from generating the shapefile from the PCD file, which indicates that it is in the os1_sensor coordinate frame:

[Annotation 2020-01-23 184421]

[Annotation 2020-01-23 185405]
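If the two frames really differ by a 180-degree rotation about Z plus the Z offset quoted earlier in this thread, converting between them is a small transform. Treat both the rotation and the 36.18 mm offset as assumptions here and verify them against your sensor's documentation and metadata:

```python
import numpy as np

# Assumed relationship between the os1_lidar and os1_sensor frames, based on
# the figures quoted in this thread: a 180-degree rotation about Z plus a
# 36.18 mm offset along Z. Check both against your own sensor's documentation.
R_180_about_z = np.array([[-1.0,  0.0, 0.0],
                          [ 0.0, -1.0, 0.0],
                          [ 0.0,  0.0, 1.0]])
z_offset = 0.03618  # metres (value quoted earlier in the thread)

def lidar_to_sensor(p):
    """Map a point from the lidar frame into the sensor frame (assumed transform)."""
    return R_180_about_z @ np.asarray(p) + np.array([0.0, 0.0, z_offset])

print(lidar_to_sensor([1.0, 2.0, 0.0]))  # X and Y flip sign; Z gains the offset
```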

Please help me here. In terms of measuring the height of a point with a positive Z value (e.g. 0.469 m), it would be vehicle height + Z translation + Z value (1.65218 m + 0.469 m) = 2.12118 m, meaning the point is 2.12118 m above the ground. For a negative Z value like -7.624 m, the height of the object is (-7.624 + 1.65218) = -5.97182, which indicates the point is 5.97182 m below the vehicle? But the height of the vehicle plus the Z translation is 1.65218 m above the ground, and I am collecting data on a flat surface, so how can a point be located lower than that? I mean, the height of my sensor, including the car and the Z translation, is 1.65218 m; my sensor is 1.65218 m above the ground. Now, if an object is below the height of my car, its height should be less than 1.65218 m, right?

Sorry for this lengthy comment, but I think it will be helpful for future readers.

Thanks for your time.

Best Ethan

zuzi-m commented 4 years ago

Yes, your expectations are right: if the Z coordinate of a point in the Lidar coordinate frame is 0.469 m, then it is at 2.12118 m in the Vehicle coordinate frame (this frame has its origin on the ground under the lidar unit, so the point is 2.12118 m from the origin of the Vehicle coordinate frame along the Z axis). But this is all with respect to the origin of the Vehicle coordinate frame, which is aligned with the vehicle, not with the surrounding world. A car sits on rubber tires and suspension, which means it is not fixed. The height of your vehicle can be 1.616 m at one moment, but while it is moving, this might not hold. Moreover, as the car moves, its orientation changes: it pitches when accelerating and braking, it can roll slightly when turning, it can drive across a road that is not level, and it can be loaded differently, which shifts the balance of the vehicle. All this means that even if you are on a perfectly flat surface (which is never the case in the real world anyway), the orientation of the Vehicle coordinate frame with respect to the ground will vary.
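To put a rough number on the effect described above, a small tilt of the vehicle moves far-away points a long way in Z. The 2-degree pitch and 70 m range below are made-up illustrative values:

```python
import math

# Illustrative only: how much a small vehicle tilt shifts a distant point in Z.
pitch_deg = 2.0  # assumed tilt of the vehicle
range_m = 70.0   # assumed horizontal distance to the point
z_shift = range_m * math.sin(math.radians(pitch_deg))
print(round(z_shift, 2))  # a 2-degree tilt moves a 70 m point by ~2.44 m in Z
```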

If you still wonder how there can be a point in your data that appears to be over 7 meters below the lidar and almost 5 meters below the 'ground', you have to see the bigger picture. The point from the data has the following coordinates: 61.326, 41.449, -7.624. It is 7 meters below the lidar, but it is also 61 meters away in the X direction and 41 meters away in the Y direction, which means this point is actually around 70 meters away from your sensor. If you use basic right-triangle calculations, you can see that a point 70 meters away from the sensor and 7 meters below it is only about 5 degrees below the horizon. Therefore, even on perfectly flat ground, it is enough for the car to tilt by 5 degrees for some sufficiently distant points to have values that don't seem right at first sight. It's all a matter of different coordinate frames and their position/orientation with respect to each other.
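The right-triangle argument above can be checked numerically with the actual point from the thread:

```python
import math

# Checking the geometry with the point quoted in this thread
x, y, z = 61.326, 41.449, -7.624
horizontal = math.hypot(x, y)                          # distance over the ground plane
depression = math.degrees(math.atan2(-z, horizontal))  # angle below the horizon
print(round(horizontal, 1), round(depression, 1))      # ~74.0 m out, ~5.9 degrees down
```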

In general, to see what your data looks like and to validate that you are measuring and transforming it correctly, it is often helpful to just visualize the data and check whether you can recognize objects correctly in the visualization. You can even run experiments where you place distinctive known objects at known locations (it is usually helpful to put them along the expected axes of your coordinate system), then visualize the data and confirm that you see the expected objects in their expected locations.

ethan20120 commented 4 years ago

Excellent. Thank you very much @zuzi-m .