[Closed] wqz960 closed this issue 3 years ago
I'm not sure. Did you use the same code to train the model? This code is what I used to achieve the result reported in the paper. You can also try a larger batch size, such as 64.
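For illustration, here is a minimal sketch of the batch-size change in PyTorch. The dataset class name `Pose_300W_LP`, the transforms, and the paths are assumptions borrowed from the original deep-head-pose training script and may differ in this repo; only the `batch_size=64` argument is the change being suggested.

```python
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

# Hypothetical import: the original deep-head-pose code defines this class in
# datasets.py; the class name and constructor may differ in this repository.
from datasets import Pose_300W_LP

# Standard ImageNet-style preprocessing, roughly following the original script.
transformations = transforms.Compose([
    transforms.Resize(240),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Assumed local paths -- point these at your own 300W-LP copy.
train_dataset = Pose_300W_LP('datasets/300W_LP',
                             'datasets/300W_LP/filename_list.txt',
                             transformations)

# The one change suggested above: a larger batch size than the script's default.
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True, num_workers=2)
```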
To reproduce the same result, you can use the provided pretrained model.
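A minimal sketch of loading a pretrained snapshot for evaluation, assuming this repo keeps the original deep-head-pose `Hopenet` definition (ResNet-50 backbone, 66 bins per Euler angle); the checkpoint filename below is a placeholder for whichever pretrained file is actually provided:

```python
import torch
import torchvision

import hopenet  # model definition module; name taken from deep-head-pose

# Assumed checkpoint filename -- replace with the provided pretrained snapshot.
snapshot_path = 'hopenet_robust_alpha1.pkl'

# ResNet-50 backbone with 66 classification bins per angle, as in the original Hopenet.
model = hopenet.Hopenet(torchvision.models.resnet.Bottleneck, [3, 4, 6, 3], 66)
model.load_state_dict(torch.load(snapshot_path, map_location='cpu'))
model.eval()
```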
@haofanwang Hi! I just ran `python train_hopenet.py` with the default parameters. OK, I will try a larger batch size. Thank you!
@haofanwang There is still a gap. Where are the bounding-box files (e.g. frame_0001_rgb.txt) for the BIWI dataset? Could you provide the bbox files? Thank you!
I don't work on this problem anymore, so I don't have the dataset on hand. You could contact the author of https://github.com/natanielruiz/deep-head-pose; that may help.
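If the original detection files cannot be obtained, one workaround (not the authors' method) is to run your own face detector over the BIWI frames and write one bbox file per frame. The directory layout and the file format below (`x_min y_min x_max y_max confidence`) are assumptions; check the repo's BIWI dataset class for the format it actually expects.

```python
import glob
import os

from PIL import Image
from facenet_pytorch import MTCNN  # any face detector works; MTCNN is just an example

detector = MTCNN()

biwi_dir = 'datasets/BIWI/hpdb'  # assumed dataset layout: <seq>/frame_XXXXX_rgb.png
for frame_path in glob.glob(os.path.join(biwi_dir, '*', '*_rgb.png')):
    img = Image.open(frame_path).convert('RGB')
    boxes, probs = detector.detect(img)
    if boxes is None:
        continue  # no face found in this frame
    x_min, y_min, x_max, y_max = boxes[0]
    # Assumed output format: one detection per line, matching frame_XXXX_rgb.txt.
    bbox_path = frame_path.replace('.png', '.txt')
    with open(bbox_path, 'w') as f:
        f.write('%f %f %f %f %f\n' % (x_min, y_min, x_max, y_max, probs[0]))
```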
Thank you for your code! I retrained the model on 300W-LP and evaluated it on AFLW2000. The MAE is 5.71, which is worse than the 5.39 reported in the paper. The only thing I changed was the dataset, switching to 300WLP-multi. Can you tell me why? Thank you!
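For reference, a minimal sketch of the metric being compared: mean absolute error of yaw, pitch and roll in degrees over the AFLW2000 test set, with the per-angle errors averaged into a single number. The array names and shapes here are placeholders, not code from the repo.

```python
import numpy as np

def mean_absolute_error(pred, gt):
    """pred, gt: arrays of shape (N, 3) holding [yaw, pitch, roll] in degrees."""
    per_angle = np.mean(np.abs(pred - gt), axis=0)   # MAE for each angle
    return per_angle, per_angle.mean()               # overall MAE, averaged over 3 angles

# Placeholder usage with random values, just to show the shapes involved:
pred = np.random.uniform(-90, 90, size=(2000, 3))
gt = np.random.uniform(-90, 90, size=(2000, 3))
per_angle_mae, overall_mae = mean_absolute_error(pred, gt)
print('yaw/pitch/roll MAE:', per_angle_mae, '| overall MAE:', overall_mae)
```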