lucidrains / x-clip

A concise but complete implementation of CLIP with various experimental improvements from recent papers
MIT License

Suggest your favorite papers to add! #1

Open lucidrains opened 2 years ago

lucidrains commented 2 years ago

will start with

  1. FILIP https://arxiv.org/abs/2111.07783
  2. CLOOB https://arxiv.org/abs/2110.11316
  3. https://arxiv.org/abs/2110.05208
Mut1nyJD commented 2 years ago

Florence https://arxiv.org/abs/2111.11432

afiaka87 commented 2 years ago

Would it be possible to explicitly target the same API OpenAI created for their CLIP? That way it could be used as a drop-in replacement in e.g. CLIP-guidance notebooks (but anywhere else CLIP is used as well, which is a lot of places).

I think this would basically amount to using the same function signatures for clip.load(), encode_image, encode_text, etc. Not sure how limiting that could be in practice.
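
For illustration, a thin compatibility wrapper along those lines could look like the sketch below. The embed_image / embed_text method names on the wrapped x-clip model are placeholders (the real x-clip internals may differ), so this is only a sketch of the idea, not the actual API.

import torch
from torch import nn

class OpenAICompatCLIP(nn.Module):
    # expose OpenAI-CLIP-style encode_image / encode_text on top of another model
    def __init__(self, xclip_model: nn.Module):
        super().__init__()
        self.model = xclip_model

    @torch.no_grad()
    def encode_image(self, images: torch.Tensor) -> torch.Tensor:
        return self.model.embed_image(images)  # hypothetical method on the wrapped model

    @torch.no_grad()
    def encode_text(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.model.embed_text(tokens)   # hypothetical method on the wrapped model

# downstream notebooks could then keep the familiar calls:
# model = OpenAICompatCLIP(xclip_model)
# image_features = model.encode_image(images)
# text_features = model.encode_text(tokens)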

lucidrains commented 2 years ago

sure! but i'm also thinking of extending this to any number of modalities (audio, biosequences, etc.)

rom1504 commented 2 years ago

LiT: Zero-Shot Transfer with Locked-image Text Tuning https://arxiv.org/abs/2111.07991. In particular, I think it would be interesting to be able to somehow transfer the weights of existing models (CLIP image and text encoders, but also other pretrained encoders) to this implementation and then continue training. Do you think there could be a good way to do that?

RenShuhuai-Andy commented 2 years ago

MURAL: Multimodal, Multitask Retrieval Across Languages: https://arxiv.org/abs/2109.05125

haofanwang commented 2 years ago

Combined Scaling for Zero-shot Transfer Learning

https://arxiv.org/abs/2111.10050

lucidrains commented 2 years ago

> LiT: Zero-Shot Transfer with Locked-image Text Tuning https://arxiv.org/abs/2111.07991. In particular, I think it would be interesting to be able to somehow transfer the weights of existing models (CLIP image and text encoders, but also other pretrained encoders) to this implementation and then continue training. Do you think there could be a good way to do that?

yup, i think it'll end up something like

clip = CLIP(
    vision_model = vit_transformer,
    text_model = text_transformer,
    ...
)
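
A LiT-style variant of that sketch, where a pretrained image tower is locked and only the text side is trained (as rom1504 suggested), could then look roughly like this; it reuses the hypothetical names from the sketch above and is not a final API:

import torch

# lock the pretrained image tower, LiT-style, so only the text side is trained
for p in vit_transformer.parameters():
    p.requires_grad = False

clip = CLIP(
    vision_model = vit_transformer,
    text_model = text_transformer,
)

# only parameters still requiring grad (text tower + projection heads) get updated
optimizer = torch.optim.AdamW(
    (p for p in clip.parameters() if p.requires_grad),
    lr = 1e-4,
)
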
antofuller commented 2 years ago

CLIP-Lite: Information Efficient Visual Representation Learning from Textual Annotations: https://arxiv.org/pdf/2112.07133.pdf

afiaka87 commented 2 years ago

RegionCLIP: https://arxiv.org/abs/2112.09106v1

They encourage region-level representations by using the released CLIP both to detect objects and to generate region-level captions for objects in a scene, which becomes the dataset for fine-tuning an object detection task. Still reading, but I believe it's a Microsoft paper.

batrlatom commented 2 years ago

Hi, I would just like to ask if it is possible to make your models scriptable? It looks like the lambda functions make it problematic for a normal user. The good thing about TorchScript is that it allows export to ONNX, TensorRT, etc.
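
For what it's worth, the usual workaround is to replace captured lambdas with small nn.Module subclasses, which TorchScript can compile. A minimal illustration (the Scale module is just an example, not taken from x-clip):

import torch
from torch import nn

# torch.jit.script cannot compile Python lambdas held inside a module, so a model
# built like nn.Sequential(nn.Linear(4, 4), Lambda(lambda t: t * 2)) fails to script.
# Replacing the lambda with a tiny nn.Module makes it scriptable:

class Scale(nn.Module):
    def __init__(self, factor: float):
        super().__init__()
        self.factor = factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.factor

net = nn.Sequential(nn.Linear(4, 4), Scale(2.0))
scripted = torch.jit.script(net)  # works, and can then be exported to ONNX, TensorRT, etc.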

rom1504 commented 2 years ago

https://github.com/facebookresearch/SLIP. They combine the losses of CLIP (vision + language) and SimCLR (vision) and get better zero-shot accuracy on a 15M dataset than CLIP on the same dataset. Hopefully accuracies would be even better at large scale.
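
As a rough illustration of that combination, the total objective is a CLIP-style image-text contrastive loss plus a SimCLR-style loss between two augmented views of the image. A sketch (the loss functions and the ssl_scale weight below are simplified stand-ins, not the SLIP code):

import torch
import torch.nn.functional as F

def clip_loss(image_embeds, text_embeds, temperature = 0.07):
    # symmetric InfoNCE between L2-normalized image and text embeddings
    image_embeds = F.normalize(image_embeds, dim = -1)
    text_embeds = F.normalize(text_embeds, dim = -1)
    logits = image_embeds @ text_embeds.t() / temperature
    labels = torch.arange(logits.size(0), device = logits.device)
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

def simclr_loss(view1_embeds, view2_embeds, temperature = 0.1):
    # NT-Xent between two augmented views of the same images
    n = view1_embeds.size(0)
    z = F.normalize(torch.cat([view1_embeds, view2_embeds]), dim = -1)
    sim = z @ z.t() / temperature
    sim = sim.masked_fill(torch.eye(2 * n, dtype = torch.bool, device = z.device), float('-inf'))
    labels = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, labels)

# SLIP-style total loss: contrastive image-text term plus a self-supervised image term
# loss = clip_loss(image_embeds, text_embeds) + ssl_scale * simclr_loss(view1_embeds, view2_embeds)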

rom1504 commented 2 years ago

https://github.com/FreddeFrallan/Multilingual-CLIP works pretty well although they used very few resources. Basically they took an existing text model and aligned it with the existing CLIP image encoder.

Here's one example showing it works well:

Searching for "blue dress" in Korean

With CLIP

https://rom1504.github.io/clip-retrieval/?back=https%3A%2F%2Fknn.laion.ai&index=laion_400m_128G&useMclip=false&query=%ED%8C%8C%EB%9E%80+%EB%93%9C%EB%A0%88%EC%8A%A4

With mCLIP

https://rom1504.github.io/clip-retrieval/?back=https%3A%2F%2Fknn.laion.ai&index=laion_400m_128G&useMclip=true&query=%ED%8C%8C%EB%9E%80+%EB%93%9C%EB%A0%88%EC%8A%A4

(Many other examples can be tried in that UI.)

I think we may be able to learn something from their approach

Edit: in practice I believe we already have what we need in the code here: the ability to plug in a custom text encoder
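
For context, the Multilingual-CLIP recipe is roughly: freeze a multilingual text encoder and train only a projection on top of it to match the frozen CLIP text encoder's embeddings on translated caption pairs. A rough sketch (the encoder, dimensions, and the commented-out loss are placeholders, not their actual code):

import torch
from torch import nn
import torch.nn.functional as F

class MultilingualTextAdapter(nn.Module):
    def __init__(self, multilingual_encoder: nn.Module, in_dim: int = 768, clip_dim: int = 512):
        super().__init__()
        self.encoder = multilingual_encoder       # frozen multilingual transformer
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.proj = nn.Linear(in_dim, clip_dim)   # the only trained part

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(tokens)              # assumed to return pooled (batch, in_dim) features
        return self.proj(feats)                   # mapped into CLIP's text embedding space

# one training step regresses onto the frozen CLIP text encoder's embedding of the
# English version of the same caption (teacher-student distillation):
# loss = F.mse_loss(adapter(non_english_tokens), clip_text_encoder(english_tokens))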

haofanwang commented 2 years ago

https://arxiv.org/abs/2112.09133

Any plan to implement MaskFeat? @lucidrains

lucidrains commented 2 years ago

@haofanwang ohh nope, this doesn't look like it is related to contrastive learning

i could add it to https://github.com/lucidrains/vit-pytorch , but i'd have to understand HOGs better

transformers007 commented 2 years ago

@lucidrains BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation, code

lucidrains commented 2 years ago

> @lucidrains BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation, code

this is a great paper :) but it also already came with code!

MicPie commented 2 years ago

Hi @lucidrains ,

I hope you are doing fine? We miss you over at the EAI Discord!

This could be very interesting for x-clip: “FLAVA - A Foundational Language And Vision Alignment Model”, https://arxiv.org/abs/2112.04482

However, the official code seems to be on the way too: https://github.com/facebookresearch/mmf/issues/1219#issuecomment-1082160255 & https://github.com/facebookresearch/multimodal

All the best, Michael

lucidrains commented 2 years ago

@MicPie hey Michael! miss you too :heart: thanks for the share, i'll give it a read later tonight after i finish some code

MicPie commented 2 years ago

Looks interesting: "CoCa - Contrastive Captioners are Image-Text Foundation Models" https://arxiv.org/abs/2205.01917

“Unlike standard decoder transformers, CoCa omits cross-attention in the first half of the decoder layers to encode unimodal text representations, and cascades the rest of the decoder layers, cross-attending to the image encoder for multimodal image-text representations.”
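
A rough structural sketch of what that quote describes (causal masking, pooling, and the actual losses are omitted; the layer classes are generic stand-ins, not the CoCa implementation):

import torch
from torch import nn

class CoCaStyleDecoder(nn.Module):
    def __init__(self, depth: int = 12, dim: int = 512, heads: int = 8):
        super().__init__()
        half = depth // 2
        # first half: self-attention only -> unimodal text representation
        self.unimodal = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, batch_first = True) for _ in range(half)
        )
        # second half: cross-attends to the image encoder output -> multimodal representation
        self.multimodal = nn.ModuleList(
            nn.TransformerDecoderLayer(dim, heads, batch_first = True) for _ in range(depth - half)
        )

    def forward(self, text_tokens: torch.Tensor, image_tokens: torch.Tensor):
        x = text_tokens
        for layer in self.unimodal:
            x = layer(x)
        unimodal_text = x                         # feeds the contrastive loss
        for layer in self.multimodal:
            x = layer(x, image_tokens)            # cross-attention to image tokens
        return unimodal_text, x                   # multimodal output feeds the captioning loss

# e.g. dec = CoCaStyleDecoder(); uni, multi = dec(torch.randn(2, 16, 512), torch.randn(2, 49, 512))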

jwyang commented 2 years ago

> Florence https://arxiv.org/abs/2111.11432

Please refer to our UniCL repo for the core algorithm used in Florence: https://github.com/microsoft/UniCL