johko opened this issue 2 years ago
@johko have you started implementing it?
@fcakyon Yes, I have started, but progress is still rather slow, as this is my first model contribution and I have to figure some things out.
@johko I totally understand it. Interested in your implementation since I will be using VATT in my research next year :)
Are you working on a TF implementation?
Sorry for the late reply (again). Yes, I'm working on a TF implementation. Since the original repo uses TensorFlow, I'm doing that first and will then look into PyTorch.
@johko, thanks for the response! I may also help with the PyTorch part once you finalize the TF implementation.
@fcakyon That would be great, as my expertise is more in TF.
Hey @NielsRogge, I'm sorry, but I think I have to stop working on this for good. I'd love to finish it, but every time I think I finally have some time for it, something else comes up :disappointed:
I just can't take on a big contribution like this at the moment and will rather focus on smaller things. But maybe @fcakyon wants to pick it up.
Sorry for blocking this for so long.
Any news about the VATT PyTorch implementation?
Model description
Hey, as discussed with @NielsRogge a few weeks back, I'd like to work on adding the "VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text" model from Google.
It is basically three Transformers (video/audio/text) that are trained jointly in an unsupervised manner using contrastive loss functions. For downstream tasks they fine-tune the Transformers separately, but they also explore a version that shares the weights across all modalities.
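For anyone picking this up, here is a minimal sketch of the pairwise contrastive objective, just to illustrate the idea. Everything here (function name, temperature value, symmetric formulation) is my own placeholder; the paper uses plain NCE for the video-audio pair and MIL-NCE for the video-text pair, which this simplified version does not reproduce exactly:

```python
# Hypothetical, simplified InfoNCE-style loss between two modality embeddings.
import torch
import torch.nn.functional as F

def pairwise_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """z_a, z_b: (batch, dim) projections of two modalities into a shared space."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Matched clips sit on the diagonal; every other in-batch pair is a negative.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```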
For pre-training they use text-video-audio triplets from HowTo100M and video-audio pairs from AudioSet. The authors describe how to fine-tune VATT for vision and audio classification tasks and provide weights for the fine-tuned versions.
The backbone for vision is ViT, for audio WaveFormTransformer, and for text they use BERT/T5.
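And a rough sketch of how the three towers could be wired together, to make the intended structure concrete. This is hypothetical: layer counts, dimensions, mean pooling, and the single shared projection space are my simplifications (the paper actually projects video/audio into one common space and then video/text into a coarser one):

```python
import torch
import torch.nn as nn

def make_encoder(dim: int, depth: int = 2) -> nn.TransformerEncoder:
    # Tiny placeholder encoder; VATT's real towers are much larger ViT-style stacks.
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=depth)

class ToyVATT(nn.Module):
    def __init__(self, dim: int = 256, common_dim: int = 128):
        super().__init__()
        # One Transformer per modality; the paper also explores sharing these weights.
        self.video_enc = make_encoder(dim)
        self.audio_enc = make_encoder(dim)
        self.text_enc = make_encoder(dim)
        # Linear projections into a common space used by the contrastive losses.
        self.video_proj = nn.Linear(dim, common_dim)
        self.audio_proj = nn.Linear(dim, common_dim)
        self.text_proj = nn.Linear(dim, common_dim)

    def forward(self, video_tokens, audio_tokens, text_tokens):
        # Inputs: (batch, seq_len, dim) patch/waveform/word token embeddings.
        # Mean pooling stands in for the paper's aggregation of token outputs.
        v = self.video_proj(self.video_enc(video_tokens).mean(dim=1))
        a = self.audio_proj(self.audio_enc(audio_tokens).mean(dim=1))
        t = self.text_proj(self.text_enc(text_tokens).mean(dim=1))
        return v, a, t  # feed pairs of these into the contrastive loss above
```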
Open source status
Provide useful links for the implementation
Paper: https://arxiv.org/pdf/2104.11178.pdf
GitHub: https://github.com/google-research/google-research/tree/master/vatt