zjukg / Structure-CLIP

[Paper] [AAAI2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations
https://arxiv.org/abs/2305.06152

Test results for Figure 1 in the paper #6

Closed xiaoslzhang closed 8 months ago

xiaoslzhang commented 8 months ago

Hello, I'd like to ask how the CLIP results in Figure 1 of the paper were obtained. The scores I compute with CLIP are the opposite of yours: I get [0.63, 0.37] and [0.539, 0.461], whereas the paper reports [0.401, 0.599] and [0.289, 0.711], so the conclusion appears to be exactly reversed. Here is my test code:

```python
import torch
import clip
from PIL import Image

# Load the CLIP ViT-B/32 checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Image and the two candidate captions from Figure 1
image = preprocess(Image.open("dress.png")).unsqueeze(0).to(device)
text = clip.tokenize(["A blue dress and a red book", "A red dress and a blue book"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

    # Image-to-text similarity logits, softmaxed into matching probabilities
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)
```

BigHyf commented 8 months ago

Hi, we used the demo on Hugging Face; the link is https://huggingface.co/openai/clip-vit-base-patch32
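
For anyone who wants to reproduce this locally rather than through the hosted demo widget, below is a minimal sketch that scores the same image and captions with the openai/clip-vit-base-patch32 checkpoint via the `transformers` library. This is only an approximation of what the demo page computes (its exact preprocessing is an assumption); the image path and captions are reused from the test code above.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Same checkpoint that backs the Hugging Face demo page
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Image and captions reused from the test code above
image = Image.open("dress.png")
texts = ["A blue dress and a red book", "A red dress and a blue book"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (num_images, num_texts)
probs = outputs.logits_per_image.softmax(dim=-1)
print("Label probs:", probs)
```

In principle this pipeline and the local `clip` package should produce very similar probabilities; any gap would come from preprocessing or tokenization details rather than the model weights.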

Here are the test results we have now:

[two screenshots of the demo output]

The illustration is for reference only; the main point is to show that some image-text matching cases exhibit the problem described in the paper.