huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

An example for fine-tuning FLAVA or any VLP multimodal model using the Trainer (for example, for classification) #18066

Open · Ngheissari opened 2 years ago

Ngheissari commented 2 years ago

Feature request

There is no example of fine-tuning a VLP model using the Trainer. I would appreciate an example.

Motivation

It is not clear how to use the Trainer with a Vision-and-Language pretrained (VLP) model.

Your contribution

None.
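For anyone landing here from search, a minimal sketch of what such a setup could look like: wrap `FlavaModel` with a classification head so the forward pass returns a loss, then hand it to `Trainer`. The checkpoint, head, hyperparameters, and `train_dataset` below are illustrative assumptions, not an official recipe; `train_dataset` is assumed to yield `input_ids`, `attention_mask`, `pixel_values`, and `labels` (see the preprocessing sketch further down the thread).

```python
from torch import nn
from transformers import FlavaModel, Trainer, TrainingArguments

class FlavaForClassification(nn.Module):
    """Hypothetical wrapper: FLAVA's multimodal encoder plus a linear head."""

    def __init__(self, num_labels: int):
        super().__init__()
        self.flava = FlavaModel.from_pretrained("facebook/flava-full")
        hidden_size = self.flava.config.multimodal_config.hidden_size
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, input_ids=None, attention_mask=None, pixel_values=None, labels=None):
        outputs = self.flava(
            input_ids=input_ids,
            attention_mask=attention_mask,
            pixel_values=pixel_values,
        )
        # Multimodal embeddings are only produced when both text and image
        # inputs are passed; position 0 is the encoder's CLS-style token.
        pooled = outputs.multimodal_embeddings[:, 0]
        logits = self.classifier(pooled)
        loss = None
        if labels is not None:
            loss = nn.functional.cross_entropy(logits, labels)
        # Trainer reads the loss from a dict-style output.
        return {"loss": loss, "logits": logits}

args = TrainingArguments(
    output_dir="flava-classification",  # assumed values throughout
    per_device_train_batch_size=8,
    num_train_epochs=3,
    remove_unused_columns=False,  # keep pixel_values etc. in the batch
)
trainer = Trainer(
    model=FlavaForClassification(num_labels=2),
    args=args,
    train_dataset=train_dataset,  # assumed: a preprocessed dataset, defined elsewhere
)
trainer.train()
```

Since the wrapper is a plain `nn.Module` rather than a `PreTrainedModel`, `remove_unused_columns=False` avoids the Trainer dropping dataset columns based on the forward signature.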

NielsRogge commented 2 years ago

Hi,

Notebooks for FLAVA will soon be available in https://github.com/NielsRogge/Transformers-Tutorials.

You can already find some tutorials here: https://github.com/apsdehal/flava-tutorials.

cc @apsdehal

github-actions[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

jorgemcgomes commented 2 years ago

The issue was auto-marked as stale, but there aren't yet any resources on how to fine-tune FLAVA. Neither of the links posted above by @NielsRogge has instructions on fine-tuning.

I'm also posting to express my interest in this.
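Until an official notebook lands, here is a rough preprocessing sketch that pairs with the `Trainer` wrapper above. It assumes a 🤗 Datasets dataset with hypothetical `image`, `text`, and `label` columns; the dataset path and column names are placeholders, not a tested pipeline.

```python
from datasets import load_dataset
from transformers import FlavaProcessor

processor = FlavaProcessor.from_pretrained("facebook/flava-full")

def preprocess(batch):
    # "text", "image", and "label" are assumed column names for your data.
    inputs = processor(
        text=batch["text"],
        images=batch["image"],
        padding="max_length",
        truncation=True,
    )
    inputs["labels"] = batch["label"]
    return inputs

dataset = load_dataset("path/to/your/dataset")  # hypothetical image-text dataset
train_dataset = dataset["train"].map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)
train_dataset.set_format(
    type="torch",
    columns=["input_ids", "attention_mask", "pixel_values", "labels"],
)
```

From there, `train_dataset` can be passed straight to the `Trainer` sketch earlier in the thread.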

daanishaqureshi commented 7 months ago

Are there any fine-tuning tutorials yet in 2024?