atyshka opened this issue 3 years ago
To correct the azimuth, you can generate the range image in the sensor frame and then roll it by the sensor yaw, which is the offset w.r.t. the forward x-axis in the vehicle frame. To get the sensor yaw from the extrinsic, you can refer to this, which is where the atan2(..., ...) in the code comes from.
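The procedure described here can be sketched roughly as follows. This is a NumPy illustration, not the dataset's utility code; `roll_range_image_by_yaw` and its argument layout are hypothetical names of my own:

```python
import numpy as np

def roll_range_image_by_yaw(range_image, extrinsic):
    """Sketch: re-center a sensor-frame range image on the vehicle's +x axis.

    range_image: [H, W] array, columns spanning 2*pi of azimuth.
    extrinsic:   [4, 4] sensor-to-vehicle transform.
    """
    # Sensor yaw w.r.t. the vehicle's forward x-axis, from the rotation part
    # of the extrinsic (the atan2 referred to above).
    yaw = np.arctan2(extrinsic[1, 0], extrinsic[0, 0])
    width = range_image.shape[1]
    # Each column covers 2*pi / width radians of azimuth.
    shift = int(round(yaw / (2 * np.pi) * width))
    return np.roll(range_image, shift, axis=1)
```

The sign of the shift depends on the azimuth convention of the image, so treat the direction here as an assumption to verify against the data.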
@peisun1115 That procedure is exactly what my code is doing. range_image_utils.compute_range_image_polar calculates the azimuth in the sensor frame and then subtracts the extrinsic yaw to transform to the vehicle frame. Then I roll the image to account for that yaw. My issue is that the resulting images do not appear properly centered.
That seems to be an issue in your code? You can debug it by starting with just a few points to make sure your code is correct, and then visualizing the point cloud.
@peisun1115 I did some experiments with the point cloud in TensorFlow. In the first image, I colored the points using the azimuth in polar_image, which has the correction factor applied:
```python
polar_image = range_image_utils.compute_range_image_polar(
    tf.expand_dims(r[..., 0], 0), tf.expand_dims(e, 0),
    tf.expand_dims(tf.reverse(i, [-1]), 0))
azimuth = polar_image[0, ..., 0]
# Normalize the azimuth to [-pi, pi]
azimuth = tf.math.atan2(tf.math.sin(azimuth), tf.math.cos(azimuth))
```
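As an aside, the atan2(sin, cos) idiom on the last line is a branch-free way to wrap arbitrary angles into (-pi, pi]. A quick NumPy check with illustrative values of my own:

```python
import numpy as np

# Some angles outside (-pi, pi], plus one already inside it.
angles = np.array([0.0, 3.5, -3.5, 7.0])
wrapped = np.arctan2(np.sin(angles), np.cos(angles))
print(wrapped)  # every value now lies in (-pi, pi]
```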
You can clearly see that the azimuth roll-over point is about 45 degrees to the left of the front of the vehicle. This is not what I would expect.
Now, I ran another experiment where I removed the azimuth correction. That is, since compute_range_image_polar subtracts the correction, I perform an addition to cancel it out. The code for that looks like this:
```python
polar_image = range_image_utils.compute_range_image_polar(
    tf.expand_dims(r[..., 0], 0), tf.expand_dims(e, 0),
    tf.expand_dims(tf.reverse(i, [-1]), 0))
azimuth = polar_image[0, ..., 0]
# Add the extrinsic yaw back to cancel the correction applied internally
correction = tf.atan2(e[..., 1, 0], e[..., 0, 0])
azimuth = azimuth + correction
# Normalize the azimuth to [-pi, pi]
azimuth = tf.math.atan2(tf.math.sin(azimuth), tf.math.cos(azimuth))
```
And the resulting point cloud looks like this:
You can see here that the seam between +pi and -pi is directly behind the vehicle. Thus it seems that the azimuth is more correct without the extrinsic calibration applied.
If I print out the actual correction angle, I get 2.5887 radians. That seems like an incredibly odd yaw at which to mount the top lidar with respect to the vehicle, and my guess is that this is where the issue lies.
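For scale, here is a plain unit conversion of that printed value (nothing dataset-specific, just math):

```python
import math

correction = 2.5887  # radians, as printed above
print(math.degrees(correction))  # roughly 148 degrees off the forward axis
```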
I have created a minimal reproducible Colab notebook with this experiment if you would like to give it a try: https://colab.research.google.com/drive/1wc9labdFgP2E0v9qmfbKuMxorfP3QSHY?usp=sharing
@peisun1115 This seems to relate to #20, which says that the result of compute_range_image_polar is in the sensor frame rather than the vehicle frame. However, that function calculates an azimuth and then subtracts the extrinsic yaw from it. My interpretation, then, is that the raw images from the proto ARE corrected and natively stored in the vehicle frame. compute_range_image_polar un-corrects them (vehicle frame -> sensor frame), and compute_range_image_cartesian then re-corrects them back into the vehicle frame. This would explain my test results. Can you please clarify whether this interpretation is correct? I only ask because it seems contrary to the documentation, which says:
> Note that an azimuth correction is needed to make sure the center of the image corresponds to the +x-axis.
If the raw frame data is in vehicle frame rather than sensor frame, then no azimuth corrections ought to be necessary to center it within the frame.
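Under that interpretation the two corrections cancel, which a toy calculation makes concrete. This is plain Python illustrating my assumed reading, not the actual Waymo code; the numbers are hypothetical except for the yaw reported earlier:

```python
import math

def wrap(angle):
    # Wrap an angle into (-pi, pi] via the atan2(sin, cos) trick
    return math.atan2(math.sin(angle), math.cos(angle))

stored_azimuth = 0.3  # hypothetical azimuth stored in the proto (vehicle frame)
sensor_yaw = 2.5887   # extrinsic yaw, as reported above

# What compute_range_image_polar would yield (vehicle -> sensor frame):
in_sensor = wrap(stored_azimuth - sensor_yaw)
# What compute_range_image_cartesian would restore (sensor -> vehicle frame):
back_in_vehicle = wrap(in_sensor + sensor_yaw)
print(back_in_vehicle)  # recovers ~0.3: the net correction is zero
```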
My use case: I'm using a range-view based detector that requires the middle of the image to be the forward direction of the vehicle. I noticed in the range image code that a correction is applied to the azimuth of the range image based on the lidar extrinsic. I'm using the following code to compute and plot the range image:
Here is the azimuth image:
Here is the intensity image:
Note the discontinuity in the azimuth image: it does not go from -pi to +pi, since the extrinsic correction is applied.
So, I presumed, the image must be corrected so that the azimuth goes from -pi to +pi for everything to be properly centered. I modified the transform() function like so:
This uses a tf.roll operation to take the extrinsic into account and roll the range image so that it now spans the -pi to +pi range. Here are the resulting azimuth and intensity images:
Note that the azimuth now looks correct, but the intensity image does not appear centered on the road; the first, uncorrected intensity image seems more centered. Can someone familiar with the data format (@peisun1115) please clarify whether I am correct in applying this roll transformation?
From the Waymo Website:
The phrasing of this makes it sound like I need to perform a roll transformation to re-center the image, and yet performing such a roll operation does not produce a centered image.
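For concreteness, assuming the top lidar's range image is 2650 columns wide (an assumption on my part) and using the 2.5887 rad yaw printed earlier, the roll amounts to a shift of roughly 40% of the image width:

```python
import math

yaw = 2.5887   # correction angle reported earlier, in radians
width = 2650   # assumed column count of the top lidar range image
# Columns the tf.roll would move the image by (one column = 2*pi/width rad):
shift = round(yaw / (2 * math.pi) * width)
print(shift)
```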