Live Demo · Pre-trained Models · Report Bug
OpenAI recently released the paper Learning Transferable Visual Models From Natural Language Supervision, in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a visual encoder and a text encoder, which were trained on a whopping 400 million images and corresponding captions. OpenAI has since released a set of their smaller CLIP models, which can be found on the official CLIP GitHub.
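To illustrate the contrastive objective, here is a minimal, self-contained sketch (not OpenAI's implementation): both encoders map into a shared embedding space, every image in a batch is scored against every caption by cosine similarity, and a symmetric cross-entropy loss pulls the matching pairs together. The tensors below are random placeholders standing in for real encoder outputs.

import torch
import torch.nn.functional as F

N, dim = 8, 512                       # hypothetical batch size and embedding size
image_embs = torch.randn(N, dim)      # placeholder outputs of the visual encoder
text_embs = torch.randn(N, dim)       # placeholder outputs of the text encoder

# Cosine similarity between every image and every caption in the batch
image_embs = F.normalize(image_embs, dim=-1)
text_embs = F.normalize(text_embs, dim=-1)
logits = image_embs @ text_embs.T     # shape (N, N)

# The i-th image matches the i-th caption, so the correct "class" is the diagonal
targets = torch.arange(N)
loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2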
A live demonstration of multilingual Text-Image retrieval using M-CLIP can be found here! This demo was created by Rom1504, and it allows you to search the LAION-400M dataset in various languages using M-CLIP.
While it is possible that other versions work equally well, we have tested with the following:
pip install multilingual-clip torch
You can also choose to pip install tensorflow instead of torch.
Inference code for TensorFlow is also available in inference_example.py
from multilingual_clip import pt_multilingual_clip
import transformers
# Example texts in English, Swedish, German and Russian
texts = [
'Three blind horses listening to Mozart.',
'Älgen är skogens konung!',
'Wie leben Eisbären in der Antarktis?',
'Вы знали, что все белые медведи левши?'
]
model_name = 'M-CLIP/XLM-Roberta-Large-Vit-L-14'
# Load Model & Tokenizer
model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
embeddings = model.forward(texts, tokenizer)
print(embeddings.shape)  # torch.Size([4, 768]) for this ViT-L/14 text encoder
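A natural next step is to encode images with the matching OpenAI CLIP vision model (ViT-L/14 for this text encoder) and rank the texts above by similarity. The following continuation is only a sketch: it assumes the clip package from the official CLIP GitHub is installed, and example.jpg is a hypothetical local image.

import clip
import torch
from PIL import Image

device = 'cuda' if torch.cuda.is_available() else 'cpu'
image_model, preprocess = clip.load('ViT-L/14', device=device)

image = preprocess(Image.open('example.jpg')).unsqueeze(0).to(device)
with torch.no_grad():
    image_emb = image_model.encode_image(image).float().cpu()

# Cosine similarity between the image and each of the four texts
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_embs = embeddings.detach()
text_embs = text_embs / text_embs.norm(dim=-1, keepdim=True)
print((text_embs @ image_emb.T).squeeze(-1))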
Set up a virtualenv:
python3 -m venv .env
source .env/bin/activate
pip install -e .
Every text encoder is a transformer available on Hugging Face, with an additional linear layer on top. For more information about a specific model, click the model name in the table below to see its model card.
Name | Model Base | Vision Model | Vision Dimensions | Pre-trained Languages | #Parameters |
---|---|---|---|---|---|
LABSE Vit-L/14 | LaBSE | OpenAI ViT-L/14 | 768 | 109 Languages | 110 M |
XLM-R Large Vit-B/32 | XLM-Roberta-Large | OpenAI ViT-B/32 | 512 | 100 Languages | 344 M |
XLM-R Large Vit-L/14 | XLM-Roberta-Large | OpenAI ViT-L/14 | 768 | 100 Languages | 344 M |
XLM-R Large Vit-B/16+ | XLM-Roberta-Large | Open CLIP ViT-B-16-plus-240 | 640 | 100 Languages | 344 M |
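Conceptually, each of these models pools the transformer's token embeddings and projects the result into the embedding space of its paired CLIP vision model (the Vision Dimensions column above). The sketch below illustrates that idea with hypothetical defaults; the actual implementation in this repository may differ in details such as the pooling strategy.

import torch
import transformers

class TextEncoderSketch(torch.nn.Module):
    def __init__(self, base_model='xlm-roberta-large', out_dim=768):
        super().__init__()
        self.transformer = transformers.AutoModel.from_pretrained(base_model)
        # Linear layer mapping into the paired CLIP vision model's embedding space
        self.projection = torch.nn.Linear(self.transformer.config.hidden_size, out_dim)

    def forward(self, texts, tokenizer):
        batch = tokenizer(texts, padding=True, return_tensors='pt')
        hidden = self.transformer(**batch).last_hidden_state           # (B, T, H)
        mask = batch['attention_mask'].unsqueeze(-1).float()
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)          # masked mean over tokens
        return self.projection(pooled)                                 # (B, out_dim)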
The following table shows Txt2Img Recall@10 on the human-translated MS-COCO test set.
Name | En | De | Es | Fr | Zh | It | Pl | Ko | Ru | Tr | Jp |
---|---|---|---|---|---|---|---|---|---|---|---|
OpenAI CLIP Vit-B/32 | 90.3 | - | - | - | - | - | - | - | - | - | - |
OpenAI CLIP Vit-L/14 | 91.8 | - | - | - | - | - | - | - | - | - | - |
OpenCLIP ViT-B-16+ | 94.3 | - | - | - | - | - | - | - | - | - | - |
LABSE Vit-L/14 | 91.6 | 89.6 | 89.5 | 89.9 | 88.9 | 90.1 | 89.8 | 80.8 | 85.5 | 89.8 | 73.9 |
XLM-R Large Vit-B/32 | 91.8 | 88.7 | 89.1 | 89.4 | 89.3 | 89.8 | 91.4 | 82.1 | 86.1 | 88.8 | 81.0 |
XLM-R Large Vit-L/14 | 92.4 | 90.6 | 91.0 | 90.0 | 89.7 | 91.1 | 91.3 | 85.2 | 85.8 | 90.3 | 81.9 |
XLM-R Large Vit-B/16+ | 95.0 | 93.0 | 93.6 | 93.1 | 94.0 | 93.1 | 94.4 | 89.0 | 90.0 | 93.0 | 84.2 |
The training curves for these models are available in this Weights and Biases Report; results for other unsuccessful and ongoing experiments can be found in the Weights and Biases Project.
Older versions of M-CLIP stored the linear weights separately from Hugging Face, whereas the new models have them incorporated directly into the Hugging Face repository. More information about these older models can be found in this section.
This folder contains the code used for training the above models. If you wish to train your own model, you must do the following things:
This Google Drive folder contains pre-computed CLIP text embeddings for a large portion of the image captions of GCC + MSCOCO + VizWiz. The Google Drive folder also contains the translation data used to train the currently available models. Good luck!
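For orientation, the training objective described in the paper is cross-lingual teacher learning: the pre-computed CLIP text embeddings of the English captions act as targets, and the multilingual text encoder is trained to reproduce them from machine-translated captions. The snippet below is only a rough sketch of that idea with hypothetical names (student, teacher_embs); the actual training scripts in this folder handle data loading, batching and evaluation.

import torch
import torch.nn.functional as F

def teacher_learning_step(student, tokenizer, translated_texts, teacher_embs, optimizer):
    # student: a multilingual text encoder such as the sketch above
    # teacher_embs: pre-computed CLIP text embeddings of the original English captions
    student_embs = student(translated_texts, tokenizer)          # (B, dim)
    loss = F.mse_loss(student_embs, teacher_embs)                # match the teacher's embeddings
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()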
If you have trained a CLIP text encoder specific to your language, or another model covering a language not supported here, please feel free to contact us and we will either upload your model and credit you, or simply link to your already uploaded model.
If you have questions regarding the code or otherwise related to this GitHub page, please open an issue.
For other purposes, feel free to contact me directly at: Fredrik.Carlsson@ri.se
Distributed under the MIT License. See LICENSE
for more information.
If you found this repository useful, please consider citing:
@InProceedings{carlsson-EtAl:2022:LREC,
author = {Carlsson, Fredrik and Eisen, Philipp and Rekathati, Faton and Sahlgren, Magnus},
title = {Cross-lingual and Multilingual CLIP},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {6848--6854},
abstract = {The long-standing endeavor of relating the textual and the visual domain recently underwent a pivotal breakthrough, as OpenAI released CLIP. This model distinguishes how well an English text corresponds with a given image with unprecedented accuracy. Trained via a contrastive learning objective over a huge dataset of 400M of images and captions, it is a work that is not easily replicated, especially for low resource languages. Capitalizing on the modularization of the CLIP architecture, we propose to use cross-lingual teacher learning to re-train the textual encoder for various non-English languages. Our method requires no image data and relies entirely on machine translation which removes the need for data in the target language. We find that our method can efficiently train a new textual encoder with relatively low computational cost, whilst still outperforming previous baselines on multilingual image-text retrieval.},
url = {https://aclanthology.org/2022.lrec-1.739}
}