mlfoundations / open_clip

An open source implementation of CLIP.

no ‘logit_bias’ for DFN2B-CLIP-ViT-L-14 #736

Closed xiaohu2015 closed 10 months ago

xiaohu2015 commented 11 months ago

https://huggingface.co/apple/DFN2B-CLIP-ViT-L-14

import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer 

model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN2B-CLIP-ViT-L-14')
tokenizer = get_tokenizer('ViT-L-14')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias) # this line doesn't work

zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
gabrielilharco commented 11 months ago

Hi @xiaohu2015. Note that DFN models weren't trained with a sigmoid loss (like SigLIP), and they don't have a logit bias. So for computing the probabilities, you want this:

logits = model.logit_scale.exp() * image_features @ text_features.T
probs = logits.softmax(dim=-1)
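
For reference, a minimal sketch of a model-agnostic version of this step: it falls back to softmax when the loaded checkpoint has no logit_bias, and uses the SigLIP-style sigmoid otherwise. The getattr check is just one way to detect this; it assumes SigLIP-style checkpoints expose a logit_bias attribute on the model as in the example above.

import torch
import torch.nn.functional as F

def label_probs(model, image_features, text_features):
    # Normalize features before taking cosine similarities.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    logits = model.logit_scale.exp() * image_features @ text_features.T

    # SigLIP-style checkpoints carry a learned logit_bias; CLIP-style ones
    # (like DFN2B-CLIP-ViT-L-14) do not, so fall back to softmax.
    logit_bias = getattr(model, 'logit_bias', None)
    if logit_bias is not None:
        return torch.sigmoid(logits + logit_bias)
    return logits.softmax(dim=-1)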
xiaohu2015 commented 11 months ago


thanks