Closed windzu closed 6 months ago
What branch are you using? The fusion branch is recommended since you are already using some complicated features.
GPS pose is used to transform lidar/objects into a unified world coordinate system, so that object trajectories look more reasonable and errors can be identified more easily. For this to work, earlier versions needed the GPS pose to be present in the scene-xxx/ego_pose folder, one JSON file for each frame:
```json
{
  // 11233.json
  // the following fields are needed
  "roll": "0.726997418",
  "pitch": "1.896869674",
  "azimuth": "159.767685853",
  "x": "191028.48721109523",
  "y": "2501733.928510412",
  "z": "32.9623"
}
```
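If you need these fields as a transform matrix, here is a minimal sketch. It assumes roll/pitch/azimuth are in degrees and compose as intrinsic Z (azimuth), then Y (pitch), then X (roll) rotations; the actual convention used by SUSTechPOINTS may differ, so verify against your data before relying on it.

```python
import math

import numpy as np


def ego_pose_to_matrix(pose: dict) -> np.ndarray:
    """Build a 4x4 ego->world matrix from one ego_pose JSON record.

    Assumption: angles are degrees, applied as Z(azimuth) @ Y(pitch) @ X(roll).
    """
    roll, pitch, yaw = (math.radians(float(pose[k]))
                        for k in ("roll", "pitch", "azimuth"))
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    # Elementary rotations about Z, Y, X
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [float(pose["x"]), float(pose["y"]), float(pose["z"])]
    return T
```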
For the newest version (fusion branch), a folder named lidar_pose is used instead:
```json
{
  "lidarPose": [
    0.9999933534514654,
    0.002919647412897125,
    -0.0021837380551317405,
    0.011038699351074599,
    -0.002912585952861278,
    0.9999905423236838,
    0.0032298771287139257,
    -6.6359171117270375,
    0.0021931475044468674,
    -0.0032234953363947027,
    0.9999923995620409,
    0.09420903702002104,
    0.0,
    0.0,
    0.0,
    1.0
  ]
}
```
The lidar pose is a 4×4 matrix: when the point cloud of each frame is transformed by its matrix, all frames should match each other. We did this because GPS/UTM contains and accumulates errors, so we use lidar to refine the pose (we actually used a point cloud registration algorithm to generate lidar_pose).
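Applying the matrix can be sketched as below. The helper names are hypothetical, and it assumes the 16 values are stored row-major (which the example above suggests, given the near-identity rotation part with translations in the fourth column):

```python
import numpy as np


def load_lidar_pose(record: dict) -> np.ndarray:
    """Reshape the flat 16-element 'lidarPose' list into a 4x4
    row-major lidar->world matrix (assumed layout)."""
    return np.array(record["lidarPose"], dtype=np.float64).reshape(4, 4)


def points_to_world(points: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Transform an Nx3 array of lidar-frame points into the
    unified world frame using homogeneous coordinates."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (pose @ homo.T).T[:, :3]
```

Stacking `points_to_world(frame_points, frame_pose)` over all frames should yield a consistent, registered world-frame cloud if the poses are good.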
For stationary objects, if the lidar poses are accurate enough, the boxes can simply be interpolated or copied to the other frames and you are done. This handles the stationary-object annotation problem better.
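The copy step above amounts to re-expressing a fixed world-frame box in each frame's lidar coordinates. A minimal sketch, assuming each box is represented as a 4×4 pose in its frame's lidar coordinates (a simplification of whatever box format the tool actually uses):

```python
import numpy as np


def copy_box_to_frame(box_in_a: np.ndarray,
                      pose_a: np.ndarray,
                      pose_b: np.ndarray) -> np.ndarray:
    """Re-express a stationary box annotated in frame A's lidar
    coordinates in frame B's lidar coordinates.

    Since the box is fixed in the world:
        world_box  = pose_a @ box_in_a
        box_in_b   = inv(pose_b) @ world_box
    """
    return np.linalg.inv(pose_b) @ pose_a @ box_in_a
```

If the lidar poses are accurate, the box lands on the same physical object in every frame without per-frame adjustment.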
Thank you very much for your patient answer. My question has been resolved.
Dear author, first of all, I want to express my sincere gratitude for your contribution. SUSTechPOINTS is an extremely extensible annotation tool.
However, I am currently encountering an issue that affects the efficiency of annotation: the tracking efficiency is not very high.
Let me describe my usage: I create an instance annotation box in multiple frames by copying and pasting in consecutive frames, then use the Auto function for multi-frame tracking. Next, I correct some incorrect automatic annotation results, then continue with Auto, and so on.
But I found that if both the ego vehicle and the target are stationary, Auto cannot handle this situation well; there seem to be some problems with the tracking method it uses. Another thing I noticed is that there is a GPS option among the annotation options, but I could not find where to supply the GPS data.
To resolve the above issues, I looked at the source code and now have the following questions; I hope your answers can save me a lot of time reading it.
But since I haven't read all of the source code, I want to ask the author whether similar features already exist.