MoonBlvd / tad-IROS2019

Code for the Unsupervised Traffic Accident Detection paper, in PyTorch.
MIT License
164 stars · 39 forks

HEV-I data processed information #9

Closed: srikanthmalla closed this issue 5 years ago

srikanthmalla commented 5 years ago

Hi Brian,

  1. When I print the processed data information for HEV-I (from your .pkl data) for session 201806061148002559_432.pkl, I get: frame_id (82,), flow (82, 5, 5, 2), ego_motion (92, 3), bounding box (92, 4). Why are the lengths not equal? (A minimal loading sketch follows this list.)

  2. For the ego motion: did you use CAN information for [yaw, tx, ty] in HEV-I? If not, what did you use, and what are the units?
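
A minimal sketch of how the shapes above could be printed, assuming each processed .pkl holds a dict keyed by names like `frame_id`, `flow`, `ego_motion`, and `bbox` (the exact key names are assumptions and may differ in the released files):

```python
# Sketch only: key names ('frame_id', 'flow', 'ego_motion', 'bbox') are
# assumptions and may not match the released .pkl files exactly.
import pickle

with open('201806061148002559_432.pkl', 'rb') as f:
    data = pickle.load(f)

for key in ('frame_id', 'flow', 'ego_motion', 'bbox'):
    value = data.get(key)
    if value is not None:
        # numpy arrays report .shape; lists fall back to their length
        print(key, getattr(value, 'shape', None) or len(value))
    else:
        print(key, 'not found in this pickle')
```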

MoonBlvd commented 5 years ago

@srikanthmalla

  1. Because the ego_motion and bounding box include ground truth for 10 future frames, used for training and evaluation. We didn't need flow for frames 82 to 92, so we didn't prepare it (see the sketch after this list). This is not perfect, but this is what we did last year.

  2. I didn't use CAN. I used ORB-SLAM and then wrote a script to compute up-to-scale [yaw, tx, ty]. I would suggest you use CAN because it's available now (it was not back then...) and it's apparently more accurate. Please let me know if you try CAN :)
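
To make the offset in point 1 concrete, here is a small sketch of the alignment, using placeholder arrays with the shapes reported above; the variable names are illustrative, not the repo's:

```python
import numpy as np

# Placeholder arrays with the reported shapes: flow covers only the 82
# observed frames, while ego_motion and bbox carry 10 extra future frames
# kept as ground truth for training and evaluation.
flow       = np.zeros((82, 5, 5, 2))
ego_motion = np.zeros((92, 3))
bbox       = np.zeros((92, 4))

T = flow.shape[0]                    # observed frames (82)
horizon = ego_motion.shape[0] - T    # future ground-truth frames (10)

observed_ego,  future_ego  = ego_motion[:T], ego_motion[T:]
observed_bbox, future_bbox = bbox[:T],       bbox[T:]
```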

srikanthmalla commented 5 years ago

Thanks Brian. Regarding 2, what do you mean by compute up-to-scale? Is it in normalized coordinates (the usual monocular odometry way)?

MoonBlvd commented 5 years ago

@srikanthmalla Yes, the usual monocular odometry way. I used ORB-SLAM output without any change.
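
For reference, one possible way to turn an ORB-SLAM trajectory saved in TUM format (`timestamp tx ty tz qx qy qz qw`) into per-step up-to-scale [yaw, tx, ty]. This is a sketch under that assumption, not the script used for the dataset; the yaw axis and camera-to-world convention are assumptions:

```python
# Sketch: derive per-step up-to-scale [yaw, tx, ty] from an ORB-SLAM
# trajectory in TUM format ("timestamp tx ty tz qx qy qz qw"). Not the
# authors' exact script; the yaw axis and pose convention are assumptions.
import numpy as np
from scipy.spatial.transform import Rotation as R

def load_tum_trajectory(path):
    data = np.loadtxt(path)           # (N, 8)
    trans = data[:, 1:4]              # camera positions, up to scale
    quats = data[:, 4:8]              # (qx, qy, qz, qw), scipy's order
    return R.from_quat(quats), trans

def relative_yaw_tx_ty(rots, trans):
    """Relative motion between consecutive poses, expressed in the
    previous camera frame (assumes poses are camera-to-world)."""
    rel = []
    for i in range(1, len(trans)):
        R_rel = rots[i - 1].inv() * rots[i]
        t_rel = rots[i - 1].inv().apply(trans[i] - trans[i - 1])
        yaw = R_rel.as_euler('zyx')[0]      # heading change (assumed axis)
        rel.append([yaw, t_rel[0], t_rel[1]])
    return np.asarray(rel)                  # (N-1, 3), like ego_motion's 3 columns
```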