jiasenlu / vilbert_beta


Would you release the multi-task fine-tuning code for ViLBERT? #38

Open yangapku opened 4 years ago

yangapku commented 4 years ago

Hi, I have read your new paper "12-in-1: Multi-Task Vision and Language Representation Learning" on arXiv, which uses multi-task fine-tuning to boost the performance of ViLBERT. May I ask whether you will release this part of the code in this repo or somewhere else? Thank you very much!

jiasenlu commented 4 years ago

Hi

Thanks for your interest. Yes, we plan to release the code and pretrained models for the new paper (12-in-1). That code will be released under the Facebook AI GitHub organization and is still under internal review; I expect the code and models to be out this month. In the meantime, I'm working on a new open-source multi-modal multi-task transformer (M3Transformer), optimized for the new transformer codebase, which I also plan to release this month.

yangapku commented 4 years ago

Great! I'm delighted to hear this and will wait for the release.

jiasenlu commented 4 years ago

Check out this release! https://github.com/facebookresearch/vilbert-multi-task

yangapku commented 4 years ago

Thank you for the kind notification! Would you please also release the data for that repo, such as the LMDB feature files, along with instructions for generating features with the new ResNeXt detector?
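
For context, here is a rough sketch of how I would expect to read region features from such an LMDB file. This is only an assumption-based sketch: the file path, the key layout (one pickled dict per image ID), and the field names ("features", "boxes") follow common bottom-up-attention pipelines and are not confirmed details of your repo.

```python
import pickle

import lmdb
import numpy as np

# Open the LMDB environment read-only; "features.lmdb" is a hypothetical path.
env = lmdb.open("features.lmdb", readonly=True, lock=False)
with env.begin(write=False) as txn:
    for image_id, raw in txn.cursor():
        # Assumption: each value is a pickled dict holding the detector outputs.
        item = pickle.loads(raw)
        features = np.asarray(item["features"])  # (num_boxes, feature_dim)
        boxes = np.asarray(item["boxes"])        # (num_boxes, 4) region coordinates
        print(image_id.decode(), features.shape, boxes.shape)
        break  # inspect only the first record
```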