SimonVandenhende / Multi-Task-Learning-PyTorch

PyTorch implementation of multi-task learning architectures, incl. MTI-Net (ECCV2020).

About the pre-trained model and data augmentation #10

Closed. SicongLiang closed this issue 3 years ago.

SicongLiang commented 3 years ago

Thank you for sharing this work on multi-task learning; it is very useful!

In Sec. 4.1.4 (Training Setup) you mention using pre-trained models, e.g. a pre-trained ResNet-50, together with data augmentation. Is it correct that the NYUD-v2 results reported in Table 5(c) of your paper are also based on a pre-trained ResNet-50 with data augmentation? Also, does using pre-trained models and data augmentation improve performance by much?

Thanks again for your patience!

SimonVandenhende commented 3 years ago

I always used a pre-trained ResNet-50 model with data augmentation. However, I did not retrain the PAD-Net, JTLR and PAP-Net models. It should be possible to obtain similar numbers for those models using the provided codebase.
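
For reference, a minimal sketch of what "pre-trained ResNet-50 with data augmentation" can look like in plain PyTorch/torchvision. This is not the repository's exact configuration (the codebase defines its own backbone wrappers and joint image/label transforms so that dense targets stay aligned); it is only meant to illustrate the two ingredients discussed above.

```python
# Illustrative sketch only, assuming torchvision is available.
# The actual repository applies joint transforms to images AND label maps;
# the image-only transforms below are a simplified stand-in.
import torch
import torchvision.models as models
import torchvision.transforms as T

# Backbone initialised from ImageNet weights (the "pre-trained ResNet-50").
backbone = models.resnet50(pretrained=True)
backbone.fc = torch.nn.Identity()  # drop the classification head for dense prediction

# Typical train-time augmentation: random crops, flips and colour jitter,
# followed by ImageNet normalisation to match the pre-trained weights.
train_transform = T.Compose([
    T.RandomResizedCrop(size=(480, 640), scale=(0.5, 1.0)),
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```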