Elody-07 / AWR-Adaptive-Weighting-Regression

Code for the paper "AWR: Adaptive Weighting Regression for 3D Hand Pose Estimation", accepted by AAAI 2020.
MIT License

Pre-trained Model #16

Xadra-T opened this issue 2 years ago

Xadra-T commented 2 years ago

Hi there, when I tried to test the pre-trained model, it printed:

loading model from ./results/hourglass_1.pth
{'epoch': 14, 'MPE': 7.700112, 'AUC': 0.8504827899520097}

and the result was:

[epoch -1][MPE 25.175][AUC 0.530]

I tried twice on Google Colab: once after installing the requirements and once without. Both gave the same result. Any help is appreciated.

(Btw, the hourglass_1 results file (hourglass_1.txt) gives the expected error value of 7.7.)
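For reference, a minimal sketch (not the repo's test script) of how one might check the metadata stored in the released checkpoint; it assumes the .pth file is a dict carrying 'epoch', 'MPE' and 'AUC' alongside the weights, as the log above suggests:

```python
import torch

# Hedged sketch: load the released checkpoint and print the metadata that the
# test script echoes. The key layout is an assumption based on the log above.
checkpoint = torch.load('./results/hourglass_1.pth', map_location='cpu')
print({k: checkpoint[k] for k in ('epoch', 'MPE', 'AUC') if k in checkpoint})
# Expected (from the log above): {'epoch': 14, 'MPE': 7.700112, 'AUC': 0.8504827899520097}
```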

Elody-07 commented 2 years ago

Hi,

Sorry for the delay. We didn't specify this in the paper, but the hourglass model is trained with kernel_size=0.4 while the resnet model is trained with kernel_size=0.8. You may try out different settings. We have updated and debugged our code; you should get the expected result by running train.py now.
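A minimal sketch of that setting, assuming a flat config module like the config.py quoted later in this thread (the variable names and layout are assumptions, not the repository's exact code):

```python
# Hypothetical config sketch -- not the repository's exact config.py.
net = 'hourglass_1'                               # or e.g. 'resnet_18'
# Per the reply above: 0.4 for hourglass backbones, 0.8 for resnet backbones.
kernel_size = 0.4 if 'hourglass' in net else 0.8
```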

Xadra-T commented 2 years ago

Hi, thanks for the updates. I'm getting a memory error with the new test function (running it without the training part). Could you please look into it?

I trained the model with the new train.py. The only differences that I know of are:

  1. 4 workers instead of 8
  2. torch version is 1.10.0+cu113

Using the old test function, the error is still too high (1-stage HG: 10.2 vs. 7.7). Could it be because of these two differences?

In line 42 of the config, it says: kernel_size = 0.4 # 0.4 for hourglass and 1 for resnet. Perhaps the 1 should be changed to 0.8.

Also, in line 160 (and 149) of hourglass.py there is this line: combined_feature.append(feature), but combined_feature is not used anywhere. Was it for experiments?

Elody-07 commented 2 years ago

Hi, sorry for the delay; I missed the GitHub notifications. If you have trouble with memory during inference, you can try decreasing batch_size or num_workers. This should not affect the network's performance.
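A hedged sketch of that suggestion; the dataset and model below are stand-ins rather than the repository's classes, and the point is just that smaller batches, fewer workers, and torch.no_grad() reduce peak memory during testing:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data and model, only to illustrate a memory-friendly test loop.
dataset = TensorDataset(torch.randn(64, 1, 128, 128))      # dummy depth crops
loader = DataLoader(dataset, batch_size=8, num_workers=2,  # smaller than training
                    shuffle=False)

model = torch.nn.Conv2d(1, 42, kernel_size=3, padding=1)   # placeholder network
model.eval()
with torch.no_grad():                      # no autograd graph -> lower peak memory
    for (imgs,) in loader:
        preds = model(imgs)                # accumulate MPE / AUC here in the real test
```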

combined_feature is for experiments.

If you find errors in our code, feel free to open a pull request.

Wangyi1121 commented 3 months ago


I have encountered the same issue. When I trained and tested with the original code myself, the error was around 10.2, which is significantly higher than the 7.7 reported in the paper. This gap could be due to a variety of reasons, and I am unable to reproduce the 7.7 result.