IVRL / MulT

(CVPR 2022) MulT: An End-to-End Multitask Learning Transformer
https://ivrl.github.io/MulT

Was the Swin encoder pre-trained? #5

Closed fbragman closed 7 months ago

fbragman commented 1 year ago

Hi,

For the Swin encoder in your model, did you initialize it with pre-trained ImageNet weights, or did you train the network end-to-end from scratch?

Thanks

deblinaml commented 7 months ago

Hi, we used pre-trained ImageNet weights, as in the original Swin Transformer model (https://github.com/microsoft/Swin-Transformer/blob/2cb103f2de145ff43bb9f6fc2ae8800c24ad04c6/models/swin_transformer_moe.py#L275). Thanks!
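A common way to initialize an encoder from an ImageNet checkpoint is to keep only the checkpoint entries whose names and shapes match the target model, dropping the ImageNet classification head (a multitask model like MulT replaces it with task-specific decoders). The sketch below illustrates that filtering step; the parameter names are illustrative and not taken from the MulT code, and shape tuples stand in for tensors.

```python
# Sketch: select the checkpoint entries that can be loaded into the encoder.
# Entries map a parameter name to its shape tuple here; with PyTorch you
# would compare `tensor.shape` instead.

def filter_pretrained_state(pretrained, target):
    """Keep checkpoint entries present in `target` with a matching shape."""
    kept = {}
    for key, shape in pretrained.items():
        if key.startswith("head."):
            continue  # ImageNet classifier head: replaced by task decoders
        if key in target and target[key] == shape:
            kept[key] = shape
    return kept

# Toy example (hypothetical names in the style of Swin checkpoints):
ckpt = {
    "patch_embed.proj.weight": (96, 3, 4, 4),
    "layers.0.blocks.0.attn.qkv.weight": (288, 96),
    "head.weight": (1000, 768),  # ImageNet classifier, dropped
}
encoder = {
    "patch_embed.proj.weight": (96, 3, 4, 4),
    "layers.0.blocks.0.attn.qkv.weight": (288, 96),
}
loadable = filter_pretrained_state(ckpt, encoder)
```

With real tensors you would then call `model.load_state_dict(loadable, strict=False)` so that any encoder parameters absent from the checkpoint keep their random initialization.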