lvwj19 / PPR-Net-plus

PPR-Net++: Accurate 6D Pose Estimation in Stacked Scenarios
Apache License 2.0
36 stars 6 forks

How to calculate the AP value as shown in the paper #11

Closed: 4-0-4-notfound closed this issue 2 years ago

4-0-4-notfound commented 2 years ago

Hello, I want to reproduce your results from the paper. The evaluation tool needs R, t, and a score to compute the final AP. I want to know how to get these values from your code. It seems cluster_mat_pred and cluster_center_pred are the R and t, but how do I get the score?

Could you provide example code?

lvwj19 commented 2 years ago

Thanks for your attention. The score is the predicted visibility.

4-0-4-notfound commented 2 years ago

1. For calculating the AP, is it necessary to filter with vs_picked_idx = pred_vis_val > 0.45?

2. What is pred_conf_val used for?

3. Which kind of projectionType is used to generate the GT, orthogonal or perspective?

4. It seems t needs to be scaled by 1000 to get the AP? self.dataset['data'] *= scale (scale = 1000)

lvwj19 commented 2 years ago
1. Yes, it is needed, and you can adjust the threshold yourself. pred_vis_val is used to filter out ungraspable instances, i.e., it serves as the score.
2. pred_conf_val is used to filter the point-wise predictions before voting; you can refer to our journal paper. (A sketch follows this list.)
3. We used perspective.
4. You can check it; just keep the predictions and labels consistent in scale.
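
For reference, here is a minimal sketch of what the conf-based filtering looks like; this is not the repo's exact code, and the names pred_conf_val, pred_trans_val, and conf_threshold are illustrative assumptions:

    import numpy as np

    # A minimal sketch with dummy stand-in data; in practice these come
    # from the network: per-point confidences (N,) and point-wise
    # translation votes (N, 3). All names here are assumed.
    rng = np.random.default_rng(0)
    pred_conf_val = rng.random(2048)
    pred_trans_val = rng.random((2048, 3))

    conf_threshold = 0.5  # assumed cutoff; tune it on your data
    keep_idx = pred_conf_val > conf_threshold
    filtered_trans = pred_trans_val[keep_idx]
    # ...the subsequent voting / clustering runs on filtered_trans only
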
4-0-4-notfound commented 2 years ago

Thanks for your response. So pred_conf_val is not involved in calculating the AP?

lvwj19 commented 2 years ago

Yes, you can ignore it if you haven't trained the conf branch, because it is a post-processing optimization.

4-0-4-notfound commented 2 years ago

Thanks a lot

lvwj19 commented 2 years ago

Here is some code that you can refer to.

    # Filter out objects whose predicted visibility (vs) is below
    # vs_threshold; the remaining visibility value serves as the score.
    vs_threshold = 0.45
    pred_results = list(zip(pred_vs_cluster, cluster_center_pred, cluster_mat_pred))
    pred_results = [rst for rst in pred_results if rst[0] > vs_threshold]
    # Sort by score in descending order for the AP evaluation.
    pred_results.sort(key=lambda x: x[0], reverse=True)

    result_list = []
    for rst in pred_results:
        tmp_dict = {}
        tmp_dict['score'] = 1.0 * rst[0]            # predicted visibility as score
        tmp_dict['t'] = (rst[1] / 1000.0).tolist()  # rescale t to match the label units
        tmp_dict['R'] = rst[2].tolist()             # 3x3 rotation matrix
        result_list.append(tmp_dict)
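
As a usage note, one hypothetical way to pass result_list to the evaluation tool is to dump it per scene; the exact format your copy of the tool expects may differ, so adapt as needed:

    import json

    # Hypothetical file name and layout; verify against the evaluation
    # tool's expected input format before relying on this.
    with open('scene_0000_predictions.json', 'w') as f:
        json.dump(result_list, f, indent=2)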