LuizGuzzo opened this issue 2 weeks ago
When using basic.ai, we get the camera position from your camera's external (extrinsic) matrix. Please check whether the camera position encoded in that matrix is correct.
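Since the position is read from the external matrix, a quick first check is which convention that matrix uses. The translation column is the camera position only if the matrix maps camera to world; if it maps world to camera, the position is -RᵀT. A minimal sketch (the matrix values below are placeholders, not anyone's actual calibration):

```python
import numpy as np

# Placeholder 4x4 external matrix, as it might appear in a camera config.
M = np.array([
    [0.0, -1.0,  0.0, 2.0],
    [0.0,  0.0, -1.0, 1.5],
    [1.0,  0.0,  0.0, 0.5],
    [0.0,  0.0,  0.0, 1.0],
])

R, t = M[:3, :3], M[:3, 3]

# If M is camera-to-world, the camera position is simply the translation column:
pos_if_cam_to_world = t

# If M is world-to-camera, the camera position is -R^T t instead:
pos_if_world_to_cam = -R.T @ t

# The two candidates generally differ, so the convention matters:
print(pos_if_cam_to_world)   # -> [2.  1.5 0.5]
print(pos_if_world_to_cam)   # -> [-0.5  2.   1.5]
```

Whichever candidate matches the camera's known mounting position tells you which convention your matrix is currently in.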
Hello, I’d like to update this issue with a proof of concept demonstrating the accuracy of my sensor transformation matrices, which I believe are set up correctly. Despite this, when testing in Xtreme1, the camera appears in an unexpected, seemingly random position.
Problem Description
I am encountering issues when setting up the camera transformation relative to the LiDAR point cloud in Xtreme1 (after transitioning from basic.ai). The setup should allow for precise visualization of the camera and LiDAR point cloud together in the scene, but in Xtreme1, the camera appears misaligned.
Configuration and Procedure
- Sensor Positions and Orientations
- Transformation Process
- Verification Using Custom Code
Testing in Xtreme1
I created a test case that can be easily uploaded to Xtreme1:
- camera_config: contains the camera’s configuration and pose information.
- camera_image_0: stores the images captured by the camera.
- lidar_point_cloud_0: contains the LiDAR point cloud data, already transformed.

These are packaged as Scene_0.zip, which Xtreme1 should recognize and interpret automatically.

Current Issue in Xtreme1
Despite the transformations appearing correct locally, when uploaded to Xtreme1, the camera is displayed in an incorrect position. It does not match the expected alignment seen in local testing, and I am unsure of the cause.
Any assistance in troubleshooting this would be greatly appreciated. Please let me know if additional details or access to my code would help clarify the setup.
Here is the zip with the files and the code inside it: Scene_test.zip
And this is the result inside Xtreme1: completely uncalibrated.
It's worth noting that I applied a rotation to the camera to align its perspective with the LiDAR, but this only corrects the orientation; the XYZ translation is not affected.
I'm trying to correctly supply the camera's extrinsic matrix to calibrate the LiDAR and camera, but when loading it into Xtreme1 (or basic.ai) the camera always ends up in a wildly wrong position. In my test, the cross on the right marks where the camera should be according to the transformation, yet the tool renders the camera somewhere seemingly random, and I have no idea how to debug this. Does anyone have suggestions? Below is the image (I used the screenshot from basic.ai because it shows the camera's line of sight, but the results are the same in Xtreme1).
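A camera that ends up below the ground and facing the opposite direction is a classic symptom of a camera-to-world pose being interpreted as world-to-camera (or vice versa). One way to debug this is to invert the matrix you are supplying and re-upload. A sketch with placeholder values (not the actual calibration):

```python
import numpy as np

def invert_pose(T: np.ndarray) -> np.ndarray:
    """Invert a rigid 4x4 transform: [R | t] -> [R^T | -R^T t]."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv

# Placeholder camera pose: 2 m up, rotated 180 degrees about Z.
T_pose = np.array([
    [-1.0,  0.0, 0.0, 0.0],
    [ 0.0, -1.0, 0.0, 0.0],
    [ 0.0,  0.0, 1.0, 2.0],
    [ 0.0,  0.0, 0.0, 1.0],
])

# If the tool reads the translation column as the camera position,
# feeding it the wrong convention mirrors the camera through the origin:
print(T_pose[:3, 3])               # -> [0. 0. 2.]
print(invert_pose(T_pose)[:3, 3])  # -> [ 0.  0. -2.]  (below the ground)
```

If the inverted matrix places the camera correctly, the tool simply expects the opposite convention from the one your pipeline produces.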
As you can see, the cross on the right marks the expected camera position, while the camera line on the left shows it below the ground and facing the opposite direction (the other crosses are other sensors).
And here is the plot produced by another program from the same JSON I passed to Xtreme1/basic.ai, showing the camera in the correct position.