thunlp / PEVL

Source code for EMNLP 2022 paper “PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models”
MIT License

what is the difference of the second pre-trained model for different tasks? #13

Open ZHUXUHAN opened 1 year ago

ZHUXUHAN commented 1 year ago

I see there are different pre-trained models for each downstream task. What is the difference between them?