dahiyaaneesh / peclr

This is the pretraining code for PeCLR, an equivariant contrastive learning framework for 3D hand pose estimation, presented at ICCV 2021.
https://ait.ethz.ch/projects/2021/PeCLR/
MIT License

Reproducing Numbers #8

Closed · Seleucia closed this issue 2 years ago

Seleucia commented 2 years ago

Hello,

Thanks a lot for releasing the code. I'm having trouble reproducing the numbers reported in the paper. I'm using your pre-trained models, but I can't get the same numbers that you report:

ResNet-50 + PeCLR
Evaluation 3D KP results:
auc=0.357, mean_kp3d_avg=4.71 cm
Evaluation 3D KP ALIGNED results:
auc=0.860, mean_kp3d_avg=0.71 cm

As you describe, I'm loading your model as follows:


import torch
import torchvision.models as models
# For ResNet-50
rn50 = models.resnet50()
peclr_weights = torch.load('peclr_rn50_yt3dh_fh.pth')
rn50.load_state_dict(peclr_weights['state_dict'])
# For ResNet-152
rn152 = models.resnet152()
peclr_weights = torch.load('peclr_rn152_yt3dh_fh.pth')
rn152.load_state_dict(peclr_weights['state_dict'])
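
(A quick sanity check that the weights actually loaded; this is an illustrative snippet, assuming the state dict keys matched the torchvision backbone:

# Illustrative check: the loaded backbone should run a clean forward pass
x = torch.randn(1, 3, 224, 224)
rn50.eval()
with torch.no_grad():
    out = rn50(x)  # torchvision's ImageNet head -> (1, 1000) logits
print(out.shape)
)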

And then I'm calling the evaluate function in the evaluation_utils.py file. I'm evaluating on the test set of the FH dataset. Do you have any other code snippet for evaluation besides evaluation_utils.py? There have been some bugs in this file.

spurra commented 2 years ago

These numbers are from the official FH test set and were acquired from the codalab site: https://competitions.codalab.org/competitions/21238

The evaluate function in evaluation_utils.py is used to evaluate on data we have ground-truth for, which is not the case for the test set.
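
For reference, the codalab submission is a zip containing a pred.json with per-frame predictions. The sketch below is modelled on the public FreiHAND example code (lmb-freiburg/freihand), so treat the exact format as an assumption and verify it against the competition page:

import json
import zipfile
import numpy as np

# Hypothetical dump helper modelled on the FreiHAND example code:
# pred.json holds two lists, one with (21, 3) xyz keypoints and one with
# (778, 3) mesh vertices, one entry per test frame.
def dump_predictions(xyz_pred_list, verts_pred_list, out_path='pred.json'):
    xyz = [np.asarray(x).tolist() for x in xyz_pred_list]
    verts = [np.asarray(v).tolist() for v in verts_pred_list]
    with open(out_path, 'w') as fo:
        json.dump([xyz, verts], fo)
    with zipfile.ZipFile('pred.zip', 'w') as zf:  # codalab accepts the zip
        zf.write(out_path)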

Seleucia commented 2 years ago

Thanks for the reply. FH has released the GT for the test set as well, and I'm using the GT from the official website. Do you have any other code that might help generate the same results? I downloaded the GT from here: https://lmb.informatik.uni-freiburg.de/resources/datasets/FreihandDataset.en.html

spurra commented 2 years ago

I did not know that, thanks for updating me on this matter. I need to check how we produce the results for codalab, which may take some time as I currently do not have access to the computer. In the meantime, I would double-check on your end that you are evaluating in the exact same manner as FH does on codalab.
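
For the "ALIGNED" numbers in particular, FreiHAND applies a similarity (Procrustes) alignment of each prediction to the ground truth before computing errors. A minimal numpy sketch of that alignment (not the exact codalab code):

import numpy as np

def align_similarity(pred, gt):
    # Align pred (21, 3) to gt (21, 3) with rotation, scale and translation
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p0, g0 = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(p0.T @ g0)   # optimal rotation via SVD
    if np.linalg.det(U @ Vt) < 0:         # avoid reflections
        U[:, -1] *= -1
        S[-1] *= -1
    R = U @ Vt
    s = S.sum() / (p0 ** 2).sum()         # optimal scale
    return s * p0 @ R + mu_g              # aligned prediction

Errors computed after this alignment correspond to the "3D KP ALIGNED" rows quoted above.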

Seleucia commented 2 years ago

How do you obtain the 3D pose predictions for evaluation on the FH dataset? I'm following this strategy: I make sure no data augmentation is applied, and I use the prepare_supervised_sample function in the data_set file to get the samples (images and GT).

After the forward pass through the model, I call predictions_3d = convert_2_5D_to_3D(predictions, scale, camera_param, True), where convert_2_5D_to_3D lives in src.data_loader.utils (a rough sketch of the underlying back-projection follows my results below). The input parameters are:

I also tried other options:

Do you set use_palm to True or False? I see the default value is False. I tried both options; when I set it to True and run the code, I get better results. Here are my best results:

{'Mean_EPE_2D': tensor(13.1927), 'Median_EPE_2D': tensor(9.0325), 'Mean_EPE_3D': tensor(0.4832), 'Median_EPE_3D': tensor(0.3617), 'Median_EPE_3D_R_V_3D': tensor(1.2209e-07), 'AUC': 0.3777584816151982, 'Mean_EPE_3D_procrustes': tensor(0.0230), 'Median_EPE_3D_procrustes': tensor(0.0194), 'auc_procrustes': 0.9536341087054506}
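
(For what it's worth, whatever the exact convert_2_5D_to_3D signature, the geometric core of the conversion once absolute per-joint depths are recovered is plain pinhole back-projection. A minimal sketch of that step, not the repo's implementation:

import numpy as np

def backproject(uv, z, K):
    # uv: (21, 2) pixel coordinates, z: (21,) absolute depths in camera space,
    # K: (3, 3) camera intrinsics. Returns (21, 3) camera-space joints.
    uv1 = np.concatenate([uv, np.ones((uv.shape[0], 1))], axis=1)
    rays = uv1 @ np.linalg.inv(K).T  # unproject pixels to normalized rays
    return rays * z[:, None]         # scale each ray by its depth
)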

Here is the full code. Loading the model:

import os
import torch
from src.models.rn_25D_wMLPref import RN_25D_wMLPref

# For RN50
model_type = 'rn50'
model = RN_25D_wMLPref(backend_model=model_type)
model_path = f'{model_type}_peclr_yt3d-fh_pt_fh_ft.pth'
full_path = os.path.join(BASE_DIR, 'data', 'models', model_path)  # BASE_DIR is set elsewhere
checkpoint = torch.load(full_path)
model.load_state_dict(checkpoint['state_dict'])
model.eval()

Obtaining the data loaders:

experiment_type = "supervised"
# args.sources: only freihand
# train_param = edict(read_json(TRAINING_CONFIG_PATH))  # read from the json config
data_test = get_data(
    Data_Set, train_param, sources=args.sources,
    experiment_type=experiment_type, split='test'
)
test_data_loader, _ = get_train_val_split(
    data_test, batch_size=train_param.batch_size, num_workers=train_param.num_workers
)

Then I run the following code:

import src.experiments.evaluation_utils as eup
output = eup.evaluate(model, test_data_loader, use_procrustes=True)

I changed the following lines in order to run the code properly:

# changed lines (inside the dataset class):
camera_param = torch.tensor(self.camera_param[idx_]).float()
joints3D = self.joints.freihand_to_ait(
    torch.tensor(self.labels[idx_]).float()
)
Seleucia commented 2 years ago

Hello @spurra, did you have time to look at the code above? I would really appreciate it if you could check.

marianpetruk commented 2 years ago

Dear @spurra @dahiyaaneesh @einer @xucong-zhang

I have stumbled upon similar problems. Could you please help me understand how to properly evaluate the models and obtain the quantitative results you published?

These model weights achieve the following performance on the FreiHAND leaderboard:

ResNet-50 + PeCLR
Evaluation 3D KP results:
auc=0.357, mean_kp3d_avg=4.71 cm
Evaluation 3D KP ALIGNED results:
auc=0.860, mean_kp3d_avg=0.71 cm

ResNet-152 + PeCLR
Evaluation 3D KP results:
auc=0.360, mean_kp3d_avg=4.56 cm
Evaluation 3D KP ALIGNED results:
auc=0.868, mean_kp3d_avg=0.66 cm
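
(For context, auc here is the area under the PCK curve over 3D error thresholds; FreiHAND evaluates thresholds from 0 to 50 mm. A minimal sketch of the metric:

import numpy as np

def auc_pck(errors_mm, t_min=0.0, t_max=50.0, steps=100):
    # errors_mm: flattened per-joint euclidean errors in millimetres
    thresholds = np.linspace(t_min, t_max, steps)
    pck = [(errors_mm <= t).mean() for t in thresholds]  # hit rate per threshold
    return np.trapz(pck, thresholds) / (t_max - t_min)   # normalized area under curve
)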

I would greatly appreciate it if you could update the repository with the evaluation steps needed to obtain the declared metrics.

Thank you, and I look forward to your reply.

spurra commented 2 years ago

I apologize for the delay in responding to this. We plan on releasing the code which produces the predictions for codalab this week.

spurra commented 2 years ago

Hi all, thank you for your patience in this matter. It's been an intense week at my internship, which is why I only got to this task this weekend. I went over the prediction code and it reproduces the numbers we originally reported. It is committed and ready to be pushed to the repo. As the code base is a heavily modified version of the FH GitHub code base, I am awaiting permission from the respective authors to upload the prediction code. Once I receive that, I'll push the code.

spurra commented 2 years ago

I have received permission and the code has been pushed. Please let me know if you can reproduce the numbers on the leaderboard.

lyhsieh commented 6 months ago

So, how does one reproduce the numbers? Thank you for your help!