
CLIP series #18

Open zc12345 opened 1 year ago

zc12345 commented 1 year ago

CLIP

background

Drawbacks of previous methods

Related methods

Pseudocode

```python
# image_encoder - ResNet or Vision Transformer
# text_encoder  - CBOW or Text Transformer
# I[n, h, w, c] - minibatch of aligned images
# T[n, l]       - minibatch of aligned texts
# W_i[d_i, d_e] - learned proj of image to embed
# W_t[d_t, d_e] - learned proj of text to embed
# t             - learned temperature parameter

# extract feature representations of each modality
I_f = image_encoder(I)  # [n, d_i]
T_f = text_encoder(T)   # [n, d_t]

# joint multimodal embedding [n, d_e]
I_e = l2_normalize(np.dot(I_f, W_i), axis=1)
T_e = l2_normalize(np.dot(T_f, W_t), axis=1)

# scaled pairwise cosine similarities [n, n]
logits = np.dot(I_e, T_e.T) * np.exp(t)

# symmetric loss function
labels = np.arange(n)
loss_i = cross_entropy_loss(logits, labels, axis=0)
loss_t = cross_entropy_loss(logits, labels, axis=1)
loss = (loss_i + loss_t) / 2
```
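
The pseudocode leaves `l2_normalize` and `cross_entropy_loss` undefined. Below is a minimal runnable NumPy sketch of just the loss computation, with random features standing in for encoder outputs; the helper implementations, batch size, and embedding width are assumptions, while the 0.07 temperature initialization follows the paper.

```python
import numpy as np

def l2_normalize(x, axis=1):
    # divide each vector by its L2 norm along `axis`
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def cross_entropy_loss(logits, labels, axis):
    # log-softmax along `axis`, then mean NLL of the matched (diagonal) pairs
    shifted = logits - logits.max(axis=axis, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))
    return -log_probs[labels, labels].mean()

n, d_e = 8, 64  # assumed toy sizes
rng = np.random.default_rng(0)
I_e = l2_normalize(rng.normal(size=(n, d_e)))  # stands in for projected image features
T_e = l2_normalize(rng.normal(size=(n, d_e)))  # stands in for projected text features
t = np.log(1 / 0.07)  # so np.exp(t) = 1/0.07, CLIP's reported initialization

logits = np.dot(I_e, T_e.T) * np.exp(t)
labels = np.arange(n)
loss_i = cross_entropy_loss(logits, labels, axis=0)  # text -> image direction
loss_t = cross_entropy_loss(logits, labels, axis=1)  # image -> text direction
loss = (loss_i + loss_t) / 2
print(f"symmetric contrastive loss on random features: {loss:.4f}")
```

The axis=0/axis=1 pair is what makes the loss symmetric: each image must pick out its matching text, and each text its matching image.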

Insights on model selection


Implementation

limitations

Thoughts

zc12345 commented 1 year ago

Unifying image-caption and image-classification datasets with prefix conditioning

utils

TL;DR
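
The core trick is in the title: when mixing image-caption and image-classification data for CLIP-style training, a learned dataset-type prefix is prepended to each text, so the text encoder can absorb source-specific bias in the prefix rather than in the shared embedding. A hedged PyTorch-style sketch, not the paper's code; the module and parameter names here are hypothetical:

```python
import torch
import torch.nn as nn

class PrefixConditionedTextEncoder(nn.Module):
    # hypothetical wrapper: one learned prefix embedding per data source,
    # e.g. 0 = image-caption data, 1 = image-classification data
    def __init__(self, text_encoder: nn.Module, embed_dim: int, num_sources: int = 2):
        super().__init__()
        self.text_encoder = text_encoder  # any transformer over [n, l, d] embeddings
        self.prefix = nn.Embedding(num_sources, embed_dim)

    def forward(self, token_embeds: torch.Tensor, source_id: torch.Tensor) -> torch.Tensor:
        # token_embeds: [n, l, d]; source_id: [n] long tensor of dataset types
        prefix = self.prefix(source_id).unsqueeze(1)  # [n, 1, d]
        return self.text_encoder(torch.cat([prefix, token_embeds], dim=1))
```

Which prefix to feed at zero-shot test time then becomes a design choice of its own; the sketch leaves that to the caller via `source_id`.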

zc12345 commented 1 year ago

Unified Contrastive Learning in Image-Text-Label Space

TL;DR
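
The one-line idea of UniCL: merge image-label and image-text supervision into image-text-label triplets, so the contrastive target is no longer CLIP's identity matrix; every image/text pair sharing a label counts as a positive. A hedged NumPy sketch of the resulting bidirectional loss (the function name and exact normalization are assumptions, and the paper's handling of duplicate class prompts is omitted):

```python
import numpy as np

def unicl_style_loss(I_e, T_e, labels, temperature=0.07):
    # I_e, T_e: L2-normalized embeddings [n, d]; labels: [n] integer class ids
    logits = I_e @ T_e.T / temperature
    # multi-positive target: image i and text j match if their labels agree
    pos = (labels[:, None] == labels[None, :]).astype(float)

    def one_direction(axis):
        shifted = logits - logits.max(axis=axis, keepdims=True)
        log_prob = shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))
        # average log-likelihood over all positives of each anchor
        return (-(pos * log_prob).sum(axis=axis) / pos.sum(axis=axis)).mean()

    return (one_direction(0) + one_direction(1)) / 2
```

When all labels in the batch are distinct, `pos` collapses to the identity matrix and this reduces to the symmetric CLIP loss above, which is the point of the unification.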
