Closed russellyq closed 2 years ago
Hi, by any chance were you able to extract the labels from the .bin label file with the gt_bin_decode.py code?
@russellyq Hi, thanks for asking.
Ego motion refers to the pose (location and orientation) of the ego-vehicle in the global coordinate frame. It is generally represented by a 4-by-4 transformation matrix. It is provided in all the benchmarks and is widely used in multi-frame LiDAR detection, such as CenterPoint.

@ziqipang Thanks for your reply.
Does that mean I can use the ego motion from another benchmark, such as Waymo, for my own detection results, which only contain 'X, Y, Z, theta, H, L, W'?
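For context, applying a 4-by-4 ego pose to a detection given as 'X, Y, Z, theta, H, L, W' could look like the sketch below. This is a hypothetical illustration, not SimpleTrack's actual code; the pose values and the function name `box_to_global` are made up for the example.

```python
import numpy as np

def box_to_global(box, ego_pose):
    """Transform a box from the ego frame into the global frame.

    box: (x, y, z, theta, h, l, w) in the ego-vehicle frame.
    ego_pose: 4x4 matrix mapping ego coordinates to global coordinates.
    """
    x, y, z, theta, h, l, w = box
    # Transform the box center with the full 4x4 pose (homogeneous coords).
    center_global = ego_pose @ np.array([x, y, z, 1.0])
    # The heading only changes by the yaw component of the ego pose.
    ego_yaw = np.arctan2(ego_pose[1, 0], ego_pose[0, 0])
    theta_global = theta + ego_yaw
    return (*center_global[:3], theta_global, h, l, w)

# Example ego pose: rotated 90 degrees and translated by (10, 5, 0).
yaw = np.pi / 2
ego_pose = np.array([
    [np.cos(yaw), -np.sin(yaw), 0.0, 10.0],
    [np.sin(yaw),  np.cos(yaw), 0.0,  5.0],
    [0.0,          0.0,         1.0,  0.0],
    [0.0,          0.0,         0.0,  1.0],
])
box_global = box_to_global((1.0, 0.0, 0.0, 0.0, 1.5, 4.0, 2.0), ego_pose)
print(box_global)
```

The size fields (H, L, W) are unchanged by the pose; only the center and heading move.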
@russellyq Sorry, I haven't understood your question. Maybe you can elaborate a little more, like what do you want/don't want ego-motion for?
Closed due to inactivity.
Hi, thanks for your great work! @winstywang @ziqipang

I am trying to run this on my own dataset and on the KITTI dataset. My detector outputs "X, Y, Z, H, W, L, theta", and so does KITTI.

As I see in your code, `MOTModel` updates the detections and returns the tracking results: https://github.com/TuSimple/SimpleTrack/blob/main/mot_3d/mot.py

How could I run it on my own data with only the detection output "X, Y, Z, H, W, L, theta"?

Also, how should I understand the use of ego motion in the code?

Thanks.
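One rough way to handle a dataset that provides no ego motion is to feed an identity 4-by-4 pose, which keeps all boxes in the sensor frame. The sketch below is assumption-laden: `kitti_to_frame` and the box ordering `(x, y, z, theta, l, w, h)` are hypothetical, not SimpleTrack's actual interface.

```python
import numpy as np

def kitti_to_frame(dets):
    """Convert KITTI-style detections to 7-dim boxes plus an ego pose.

    dets: list of (X, Y, Z, H, W, L, theta) tuples from a detector.
    Returns an (N, 7) array of boxes ordered (x, y, z, theta, l, w, h)
    and a 4x4 ego pose.
    """
    boxes = np.array(
        [[x, y, z, theta, l, w, h] for x, y, z, h, w, l, theta in dets]
    )
    # No ego motion available: use the identity transform, so tracking
    # operates in the sensor frame for every frame.
    ego_pose = np.eye(4)
    return boxes, ego_pose

boxes, pose = kitti_to_frame([(1.0, 2.0, 0.5, 1.6, 1.8, 4.2, 0.3)])
print(boxes.shape, pose.shape)
```

Note that with an identity pose, object motion seen by the tracker mixes in the ego-vehicle's own movement between frames, so this only behaves well when the ego vehicle is slow or the frames are already motion-compensated.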
Hello, did you manage to replicate it on your own dataset?
@russellyq Hello, did you get it working with KITTI-format results?