tusen-ai / SimpleTrack

MIT License

Run on own data or KITTI #6

Closed russellyq closed 2 years ago

russellyq commented 2 years ago

Hi, thanks for your great work! @winstywang @ziqipang

I am trying to run SimpleTrack on my own dataset and on the KITTI dataset. My detection results are in the form "X, Y, Z, H, W, L, theta", and so are KITTI's.

As I see in your code, MOTModel takes the detections, updates its tracks, and returns tracking results. https://github.com/TuSimple/SimpleTrack/blob/main/mot_3d/mot.py

How could I run it on my own data with only the detection output "X, Y, Z, H, W, L, theta"?

Also, how should I understand the use of ego motion in the code?

Thanks.
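For concreteness, here is the kind of reordering I have in mind; a minimal sketch, assuming the tracker wants each box as [x, y, z, heading, l, w, h, score] (the layout BBox.array2bbox in mot_3d/data_protos appears to use; please correct me if that is wrong):

```python
import numpy as np

# One frame of detections: each row is X, Y, Z, H, W, L, theta, score.
raw_dets = np.array([
    [10.0, 2.0, -1.0, 1.6, 1.8, 4.2, 0.3, 0.9],
    [-4.5, 7.1, -0.8, 1.5, 1.7, 4.0, -1.2, 0.8],
])

# Reorder into [x, y, z, heading, l, w, h, score], which seems to be
# what BBox.array2bbox in mot_3d/data_protos expects.
dets = [np.array([x, y, z, theta, l, w, h, s])
        for (x, y, z, h, w, l, theta, s) in raw_dets]
```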

wenyiwangbst commented 2 years ago

Hi, by any chance were you able to extract the labels from the .bin label file with the gt_bin_decode.py code?

ziqipang commented 2 years ago

@russellyq Hi, thanks for asking.

  1. You are correct on using the API MOTModel.frame_mot to run on each frame. The input to this function is a FrameData object. To get a better sense of what these things are, I think our demo is helpful (see the sketch after this list).
  2. In our code, ego motion refers to the pose, i.e., the location of the ego-vehicle in the global coordinate frame. It is generally represented by a 4-by-4 matrix. It is provided in all the benchmarks and is widely used in multi-frame LiDAR detection, such as in CenterPoint.
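
To make both points concrete, here is a minimal sketch of one tracking step, assuming detections are already reordered to [x, y, z, heading, l, w, h, score] as above; the config path, det_types values, and aux_info keys below mirror the demo/tools scripts but are assumptions to verify against the repo:

```python
import numpy as np
import yaml
from mot_3d.mot import MOTModel
from mot_3d.frame_data import FrameData

# Load one of the YAML configs shipped with the repo (the path is illustrative;
# pick any file under configs/).
configs = yaml.safe_load(open('configs/waymo_configs/vc_kf_giou.yaml'))
tracker = MOTModel(configs)

# Ego pose: a 4x4 matrix taking ego-frame coordinates to the global frame.
# Without localization, an identity matrix keeps everything in the sensor
# frame -- a placeholder assumption, not something the repo prescribes.
ego = np.eye(4)

# Applying the ego pose: a homogeneous point in the ego frame maps to the
# global frame as p_world = ego @ [x, y, z, 1].
p_world = ego @ np.array([10.0, 2.0, -1.0, 1.0])

# One detection as [x, y, z, heading, l, w, h, score].
dets = [np.array([10.0, 2.0, -1.0, 0.3, 4.2, 1.8, 1.6, 0.9])]
frame = FrameData(dets=dets, ego=ego, time_stamp=0.0,
                  det_types=[1] * len(dets),                 # per-detection class ids
                  aux_info={'is_key_frame': True, 'velos': None})

# The demo scripts unpack each returned track as (bbox, id, state, det_type).
for bbox, track_id, state, det_type in tracker.frame_mot(frame):
    print(track_id, state, bbox)
```

For KITTI, the ego pose can be assembled from the dataset's provided OXTS/IMU poses rather than left as identity.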
russellyq commented 2 years ago

@ziqipang Thanks for your reply.

Does that mean I can use the ego motion from another benchmark, such as Waymo, for my own detection results, which only have 'X, Y, Z, theta, H, L, W'?

ziqipang commented 2 years ago

@russellyq Sorry, I don't quite understand your question. Maybe you can elaborate a little more, e.g., what do you want or not want ego motion for?

ziqipang commented 2 years ago

Closing due to inactivity.

zjwzcnjsy commented 2 years ago

Hello, I have received your email!

12w2 commented 4 months ago

> How could I run it on my own data with only the detection output "X, Y, Z, H, W, L, theta"? Also, how should I understand the use of ego motion in the code?

Hello, did you manage to replicate it on your own dataset?

mx2013713828 commented 2 months ago

@russellyq Hello, did you get it working with KITTI-format results?