openai / CLIP

CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image

Question on the example: #380

shersoni610 closed this issue 1 year ago

shersoni610 commented 1 year ago

Hello,

I see the following example on the page:

Questions:

1. We compute the image and text features, but they are not used anywhere else in the code.
2. Are the arguments to the model `(image, text)` or `(image_features, text_features)`?

Thanks

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # prints: [[0.9927937  0.00421068  0.00299572]]
```

jongwook commented 1 year ago

Those are intentional. Please see:
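
In short: `model(image, text)` takes the preprocessed image tensor and the tokenized text, not the precomputed features. Internally the forward call encodes both inputs, L2-normalizes the features, and scales their cosine similarity by the learned temperature, so the explicit `encode_image`/`encode_text` calls in the example simply show how to obtain the features on their own. Below is a rough, self-contained sketch of that relationship (it repeats the setup from the example; `model.logit_scale` is assumed to be the temperature parameter defined in this repo's `model.py`):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    # The same features the README example computes explicitly.
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

    # Normalize the features and compare them: cosine similarity scaled by
    # the learned temperature. This mirrors what model(image, text) returns
    # as logits_per_image.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    logits_per_image = model.logit_scale.exp() * image_features @ text_features.t()

    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # should match the probabilities from model(image, text)
```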