xtreme1-io / xtreme1

Xtreme1 is an all-in-one data labeling and annotation platform for multimodal data training; it supports 3D LiDAR point clouds, images, and LLMs.

camera extrinsic matrix plot problem #292

Open LuizGuzzo opened 2 weeks ago

LuizGuzzo commented 2 weeks ago

I'm trying to add the camera's extrinsic matrix so that the LiDAR and camera are calibrated to each other, but Xtreme1 (and basic.ai) always renders the camera in a completely wrong position. In my test, the cross on the right marks the transformation where the camera should be, yet the tool places the camera somewhere seemingly random, and I have no idea how to debug this. Does anyone have any suggestions? Below is the image (I used the screenshot from basic.ai because it shows the camera's line of sight, but the results are the same with Xtreme1).

image

As you can see, the cross on the right marks where the camera should be, while the camera's line of sight on the left is below the ground and pointing in the opposite direction (the other crosses are other sensors).

image

And here is the plot produced by another program from the same JSON that I passed to Xtreme1/basic.ai, showing the correct camera position.

guhaomine commented 1 week ago

When using basic.ai, we read the camera position from your camera's extrinsic matrix. You can check whether the camera position in that matrix is correct.
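
A quick way to run that check: a 4x4 extrinsic can encode either the camera pose (camera-to-LiDAR) or its inverse, the view matrix (LiDAR-to-camera), and the implied camera position differs between the two. A minimal sketch of extracting both (my own illustration, not Xtreme1 code):

```python
import numpy as np

def implied_camera_positions(extrinsic):
    """Camera positions implied by a 4x4 extrinsic under the two common conventions."""
    E = np.asarray(extrinsic, dtype=float)
    R, t = E[:3, :3], E[:3, 3]
    pos_if_pose = t          # if E maps camera -> LiDAR (a pose matrix)
    pos_if_view = -R.T @ t   # if E maps LiDAR -> camera (a view matrix)
    return pos_if_pose, pos_if_view
```

If the matrix is interpreted with the opposite convention from the one it was exported in, the camera lands at the "mirrored" position, which can easily put it below the ground and facing the wrong way.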

LuizGuzzo commented 1 week ago

Hello, I’d like to update with a proof of concept that demonstrates the accuracy of my sensor transformation matrices, which I believe are set up correctly. Despite this, when testing in Xtreme1, the camera appears in an unexpected, seemingly random position.

Problem Description

I am encountering issues when setting up the camera transformation relative to the LiDAR point cloud in Xtreme1 (after transitioning from basic.ai). The setup should allow for precise visualization of the camera and LiDAR point cloud together in the scene, but in Xtreme1, the camera appears misaligned.

Configuration and Procedure

  1. Sensor Positions and Orientations:

    • I have set up the following positional and rotational data for each sensor relative to a shared reference frame on the truck:
      • LiDAR0:
        • Position (x, y, z): (0.11, 0.0, 0.19)
        • Orientation (roll, pitch, yaw): (0.0231, 0.000679, -0.00151)
      • Camera (Intelbras3):
        • Position (x, y, z): (0.55, -1.25, -1.9)
        • Orientation (roll, pitch, yaw): (0.0, -0.03, 0.06)
      • Sensor Board 1:
        • Position (x, y, z): (4.8, 0.0, 2.61)
        • Orientation (roll, pitch, yaw): (0.0, 0.03, 0.0)
  2. Transformation Process:

    • I generated the transformation matrices by chaining these poses: first I built the sensor board's transformation in the truck's coordinate frame, then multiplied it by the LiDAR's transformation matrix to obtain the final LiDAR pose in the truck's coordinate system (see the sketch after this list).
    • The same method was used for the camera, so all transformations are relative to the same shared reference frame on the truck.
  3. Verification Using Custom Code:

    • I implemented a Python script that takes the raw data (point clouds and sensor poses) and performs all the necessary transformations. The script plots the resulting sensor positions and orientations together with the point cloud, showing consistent and accurate alignment.
    • I have added flags to toggle between using the raw point cloud and pre-transformed data intended for Xtreme1, allowing a direct comparison of the configurations. When running locally, all sensors appear correctly positioned and aligned with the point cloud data.
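
For reference, here is a minimal sketch of the chaining described in step 2, assuming roll/pitch/yaw are fixed-axis (extrinsic x-y-z) Euler angles in radians and that the LiDAR and camera poses hang off Sensor Board 1 as described; the helper `pose_to_matrix` is my own name, not from the uploaded code:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_matrix(xyz, rpy):
    """Build a 4x4 homogeneous transform from a translation and roll/pitch/yaw (radians)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", rpy).as_matrix()  # fixed-axis RPY
    T[:3, 3] = xyz
    return T

# Poses from step 1; per step 2, the sensors are chained through Sensor Board 1.
T_truck_board = pose_to_matrix((4.8, 0.0, 2.61), (0.0, 0.03, 0.0))
T_board_lidar = pose_to_matrix((0.11, 0.0, 0.19), (0.0231, 0.000679, -0.00151))
T_board_cam = pose_to_matrix((0.55, -1.25, -1.9), (0.0, -0.03, 0.06))

# Sensor poses in the truck frame
T_truck_lidar = T_truck_board @ T_board_lidar
T_truck_cam = T_truck_board @ T_board_cam

# Relative LiDAR -> camera transform, if the tool expects the extrinsic in that form
T_lidar_cam = np.linalg.inv(T_truck_lidar) @ T_truck_cam
print(T_truck_cam[:3, 3])  # where the camera should sit in the truck frame
```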

Testing in Xtreme1

I created a test case that can be easily uploaded to Xtreme1 (the zip is attached at the end of this comment).

Current Issue in Xtreme1

Despite the transformations appearing correct locally, when uploaded to Xtreme1, the camera is displayed in an incorrect position. It does not match the expected alignment seen in local testing, and I am unsure of the cause.

Any assistance in troubleshooting this would be greatly appreciated. Please let me know if additional details or access to my code would help clarify the setup.

Here is the zip with the files and the code inside it: Scene_test.zip
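
For context, a sketch of the kind of camera_config JSON that Xtreme1's point-cloud upload format takes (camera_internal intrinsics plus a 16-element camera_external matrix). The intrinsic values below are placeholders, and the row-major flattening and the LiDAR-to-camera direction are my assumptions; getting either of those wrong would misplace the camera in exactly this way:

```python
import json
import numpy as np

# Placeholder intrinsics; the real fx/fy/cx/cy come from the camera calibration.
camera_internal = {"fx": 933.3, "fy": 934.2, "cx": 896.4, "cy": 507.3}

# The 4x4 extrinsic from the sketch above (identity used here as a stand-in).
T_lidar_cam = np.eye(4)

# Assumption: camera_external is the 4x4 matrix flattened row-major.
config = [{
    "camera_internal": camera_internal,
    "camera_external": T_lidar_cam.flatten().tolist(),
}]

with open("camera_config/1.json", "w") as f:
    json.dump(config, f, indent=2)
```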

And this is the result inside Xtreme1, completely uncalibrated:

image

It's worth noting that I applied a rotation to the camera to correct the camera's perspective relative to the LiDAR, but this only fixes the orientation; the XYZ translation is not affected.
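
For illustration, the kind of perspective-correcting rotation meant above is the usual remap between a LiDAR-style frame (x forward, y left, z up) and a camera optical frame (x right, y down, z forward); the exact matrix in my code may differ, so treat this as an assumption:

```python
import numpy as np

# Rotation-only remap: p_optical = R @ p_lidar. The translation is untouched,
# which matches the behavior described above.
R_lidar_to_optical = np.array([
    [0, -1,  0],  # camera x (right)   = -LiDAR y (left)
    [0,  0, -1],  # camera y (down)    = -LiDAR z (up)
    [1,  0,  0],  # camera z (forward) =  LiDAR x (forward)
])
```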