Closed: taeyeopl closed this issue 4 years ago
Hi,
For Q1: In our paper, we introduced our simple temporal model as: "6-PACK, where the predicted pose in the next frame extrapolates from the last estimated inter-frame change of pose (constant velocity model)". To implement this idea, you can simply add the previously predicted inter-frame pose change `best_t` to the generated anchor for the next frame. This is a one-line change in either `network/eval_forward` or `dataset/get_one`.
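A minimal sketch of that constant-velocity extrapolation, assuming the anchor is an `(N, 3)` array of anchor point positions and `best_t` is the previously estimated inter-frame translation (the function name and exact shapes here are illustrative, not the repo's actual API):

```python
import numpy as np

def extrapolate_anchor(anchor, best_t):
    """Constant-velocity model: shift the anchor used for the next frame
    by the previously estimated inter-frame translation.

    anchor: (N, 3) array of anchor point positions for the current frame
    best_t: (3,) previous inter-frame translation (pose change)
    """
    # Broadcasting adds the same translation to every anchor point.
    return anchor + best_t
```

In the actual code this corresponds to adding `best_t` to the anchor right where it is generated, so the tracker searches around the extrapolated pose instead of the last estimated one.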
For Q2: Thanks for mentioning this point. Since 6-PACK is a tracking model, performance can vary between trials. This is why we ran our model for 5 trials and report the mean score as the final evaluation result. Your number is the result of trial `TEMP_50`. We also released `TEMP_51`, `52`, `53`, and `54`, which are the other 4 trials. To compute the mean score, you can simply change `pred_list` in `benchmark.py` to `pred_list = [50, 51, 52, 53, 54]`. Then you will get the score we reported.
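The averaging step can be sketched as follows. The per-trial scores below are placeholder numbers, not real results; in practice each score comes from running `benchmark.py` on that trial's predictions:

```python
# Trials to average over, matching pred_list in benchmark.py.
pred_list = [50, 51, 52, 53, 54]

# Hypothetical per-trial scores for one object/metric (placeholder values).
trial_scores = {50: 24.1, 51: 25.3, 52: 23.8, 53: 24.9, 54: 24.4}

# The reported number is the mean over the 5 trials.
mean_score = sum(trial_scores[t] for t in pred_list) / len(pred_list)
```

Averaging over trials smooths out the run-to-run variance inherent to tracking, which is why a single trial (e.g. `TEMP_50` alone) can differ noticeably from the reported numbers.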
Thanks for sharing this good work. I have some simple questions about your reported performance.
Q1. Can I ask how to implement the temporal prediction? As I understand it, `eval.py` was implemented without temporal prediction. I want to compare performance with and without temporal prediction.
Q2. I tested twice using the pretrained weights downloaded from your drive folder (https://drive.google.com/file/d/1WTarlYvObx5S6kPcGYP0k0KRvlrBCYET/view?usp=sharing).
The two runs gave similar results, but some objects (bowl, laptop) are not close to the paper under the 5 degree, 5 cm metric. Maybe random noise affects some results, but the performance gap for some objects was higher than I expected. Could you share your opinion on why bowl and laptop are lower than what the paper reports?