Henry1iu / TNT-Trajectory-Prediction

A Pytorch Implementation of TNT: Target-driveN Trajectory Prediction

Why all candidate targets share the same offset in target prediction stage? #14

Closed Joe12138 closed 2 years ago

Joe12138 commented 2 years ago

Hi, in your code I noticed that all candidate targets share the same offset in the target prediction stage. However, the TNT paper says: "For each target candidate, the TNT target predictor produces a tuple of (π(τ | x), Δx, Δy)." Could you explain why this is done?

Henry1iu commented 2 years ago

Hi,

There should be one particular offset for each candidate target. I'm not aware of anywhere I used the same offset for all candidates. Could you please point out where this happens?

Best Regards, Jianbang

Joe12138 commented 2 years ago

Hi, in the file `util/preprocessor/base.py` there is a method named `get_candidate_gt`, and its offset output is `offset_xy = gt_target - target_candidate[gt_index]`. When I run this method, `offset_xy` contains only the offset between the best target candidate and the ground truth. Maybe I am wrong about that. Could you check this and reply? Thank you very much! Best, Joe
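
For reference, here is a minimal sketch (not the repo's exact code, and the function signature is simplified) of the logic being discussed in `get_candidate_gt`: only the candidate closest to the ground-truth endpoint is marked positive, and a single 2-D offset is produced for that candidate alone.

```python
import numpy as np

def get_candidate_gt(target_candidate, gt_target):
    """Simplified sketch of util/preprocessor/base.py::get_candidate_gt.

    target_candidate: (N, 2) array of candidate endpoints
    gt_target:        (2,)   ground-truth endpoint
    """
    # index of the candidate closest to the ground-truth endpoint
    displacement = gt_target - target_candidate                 # (N, 2)
    gt_index = np.argmin(np.linalg.norm(displacement, axis=1))

    # one-hot mask marking the single positive candidate
    onehot = np.zeros(len(target_candidate))
    onehot[gt_index] = 1.0

    # a single (2,) offset: ground truth minus the closest candidate only
    offset_xy = gt_target - target_candidate[gt_index]
    return onehot, offset_xy

candidates = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
gt = np.array([1.2, 0.9])
onehot, offset = get_candidate_gt(candidates, gt)
# offset has shape (2,), not (N, 2): one offset, tied to the closest candidate
```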

Henry1iu commented 2 years ago

Hi,

Your understanding of my code is correct. This part computes the ground truth of the target position anchor and the corresponding offset. This ground-truth offset serves as the supervision for the offset predictor head.

The passage you quoted describes the prediction phase. In my understanding, during training only the offset associated with the ground-truth target candidate acts as supervision. Are you suggesting that the offsets of all the target candidates (not just the ground-truth one) should be involved during training?

Best Regards, Jianbang

Joe12138 commented 2 years ago

Hi, in my understanding, `gt_index` is the index of the candidate target closest to the ground truth; call it `cct`. The offset prediction network may therefore tend to predict the offset between `cct` and the ground truth, so any candidate other than `cct`, when added to this offset, may land far from the ground truth. However, at the target prediction stage, the result we actually use is candidate target + offset.

So, in my opinion, there should not be just one ground-truth offset. The number of ground-truth offsets should equal the number of candidate targets; in other words, each candidate should be supervised with the offset between itself and the ground truth.
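
If I understand the proposal correctly, it amounts to something like the following sketch (hypothetical, just to make the shapes concrete): the supervision becomes an `(N, 2)` array with one offset per candidate, instead of a single `(2,)` offset for the closest one.

```python
import numpy as np

def get_dense_offset_gt(target_candidate, gt_target):
    """Hypothetical dense supervision: one ground-truth offset per candidate.

    target_candidate: (N, 2) candidate endpoints
    gt_target:        (2,)   ground-truth endpoint
    returns:          (N, 2) per-candidate offsets
    """
    # offset between the ground-truth endpoint and *every* candidate,
    # so that candidate + offset always lands exactly on the ground truth
    return gt_target - target_candidate

candidates = np.array([[0.0, 0.0], [1.0, 1.0]])
gt = np.array([1.0, 0.5])
offsets = get_dense_offset_gt(candidates, gt)
# offsets == [[1.0, 0.5], [0.0, -0.5]]
```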

I don't know whether this is right, and you surely understand the code better than I do. Anyway, please tell me your thoughts. Thank you very much! Best, Joe

Henry1iu commented 2 years ago

Hi,

My implementation follows the idea of Fast R-CNN: the offset regression head is supervised only on the offset of the positive candidate, and in my case the only positive candidate is the `cct`.
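
For concreteness, a minimal sketch of this Fast-R-CNN-style supervision (the function name and exact loss terms are my own, not the repo's): the classification loss covers all candidates, but the offset regression loss is computed only on the positive candidate marked by the one-hot target.

```python
import torch
import torch.nn.functional as F

def target_pred_loss(prob_logits, offset_pred, gt_onehot, gt_offset):
    """Sketch of positive-only offset regression, Fast-R-CNN style.

    prob_logits: (B, N) candidate scores
    offset_pred: (B, N, 2) predicted per-candidate offsets
    gt_onehot:   (B, N) one-hot mask of the positive candidate ('cct')
    gt_offset:   (B, 2) ground-truth offset of the positive candidate
    """
    pos_idx = gt_onehot.argmax(dim=1)                           # (B,)

    # classification over all N candidates
    cls_loss = F.cross_entropy(prob_logits, pos_idx)

    # regression on the positive candidate's offset only
    pos_offset = offset_pred[torch.arange(len(pos_idx)), pos_idx]  # (B, 2)
    reg_loss = F.smooth_l1_loss(pos_offset, gt_offset)

    return cls_loss + reg_loss

# tiny demo: batch of 1, four candidates
logits = torch.zeros(1, 4)
offsets = torch.zeros(1, 4, 2)
onehot = torch.tensor([[0.0, 0.0, 1.0, 0.0]])
gt_off = torch.tensor([[0.3, -0.2]])
loss = target_pred_loss(logits, offsets, onehot, gt_off)
```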

That said, I did find that the offset loss changes little during training. Maybe we can run some experiments to see whether more positive candidates lead to higher accuracy. Please let me know if you find out more. :)

I can't guarantee my understanding is correct, since I'm not an author of this paper. I did try to contact the authors, but the first author declined to provide any detail beyond what is described in the paper, citing a patent issue. (LOL) You could ask the authors yourself and see whether they reveal more to you.

Best Regards, Jianbang

Joe12138 commented 2 years ago

Hi, I see. I will run some experiments in the future to compare these two approaches. Thank you for your kind reply! Best, Joe

Henry1iu commented 2 years ago

Hi Joe,

Good luck with your experiments. I'm interested in your results, too. If you find anything, please do share it with me! :)

Best Regards, Jianbang