graykode / nlp-tutorial

Natural Language Processing Tutorial for Deep Learning Researchers
https://www.reddit.com/r/MachineLearning/comments/amfinl/project_nlptutoral_repository_who_is_studying/
MIT License

a question about transformer #55

Open luojq-sysysdcs opened 4 years ago

luojq-sysysdcs commented 4 years ago

    class MultiHeadAttention(nn.Module):
        def __init__(self):
            super(MultiHeadAttention, self).__init__()
            self.W_Q = nn.Linear(d_model, d_k * n_heads)
            self.W_K = nn.Linear(d_model, d_k * n_heads)
            self.W_V = nn.Linear(d_model, d_v * n_heads)

        def forward(self, Q, K, V, attn_mask):
            # q: [batch_size x len_q x d_model], k: [batch_size x len_k x d_model], v: [batch_size x len_k x d_model]
            residual, batch_size = Q, Q.size(0)
            # (B, S, D) -proj-> (B, S, D) -split-> (B, S, H, W) -trans-> (B, H, S, W)
            q_s = self.W_Q(Q).view(batch_size, -1, n_heads, d_k).transpose(1,2)  # q_s: [batch_size x n_heads x len_q x d_k]
            k_s = self.W_K(K).view(batch_size, -1, n_heads, d_k).transpose(1,2)  # k_s: [batch_size x n_heads x len_k x d_k]
            v_s = self.W_V(V).view(batch_size, -1, n_heads, d_v).transpose(1,2)  # v_s: [batch_size x n_heads x len_k x d_v]

            attn_mask = attn_mask.unsqueeze(1).repeat(1, n_heads, 1, 1)  # attn_mask: [batch_size x n_heads x len_q x len_k]

            # context: [batch_size x n_heads x len_q x d_v], attn: [batch_size x n_heads x len_q(=len_k) x len_k(=len_q)]
            context, attn = ScaledDotProductAttention()(q_s, k_s, v_s, attn_mask)
            context = context.transpose(1, 2).contiguous().view(batch_size, -1, n_heads * d_v)  # context: [batch_size x len_q x n_heads * d_v]
            output = nn.Linear(n_heads * d_v, d_model)(context)
            return nn.LayerNorm(d_model)(output + residual), attn  # output: [batch_size x len_q x d_model]

The second-to-last line instantiates a new nn.Linear on every forward call (and the last line does the same with nn.LayerNorm). Is that right? Shouldn't these layers be instantiated in the __init__ function?
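
For reference, here is a minimal sketch of the more conventional layout, with the output projection and the LayerNorm registered once in __init__ and reused in forward. This is my own sketch (same hyperparameter names d_model, d_k, d_v, n_heads and the same ScaledDotProductAttention as above), not the repository code:

    class MultiHeadAttention(nn.Module):
        def __init__(self):
            super(MultiHeadAttention, self).__init__()
            self.W_Q = nn.Linear(d_model, d_k * n_heads)
            self.W_K = nn.Linear(d_model, d_k * n_heads)
            self.W_V = nn.Linear(d_model, d_v * n_heads)
            # registered as sub-modules, so their weights are trained and reused across calls
            self.fc = nn.Linear(n_heads * d_v, d_model)
            self.layer_norm = nn.LayerNorm(d_model)

        def forward(self, Q, K, V, attn_mask):
            residual, batch_size = Q, Q.size(0)
            q_s = self.W_Q(Q).view(batch_size, -1, n_heads, d_k).transpose(1, 2)
            k_s = self.W_K(K).view(batch_size, -1, n_heads, d_k).transpose(1, 2)
            v_s = self.W_V(V).view(batch_size, -1, n_heads, d_v).transpose(1, 2)
            attn_mask = attn_mask.unsqueeze(1).repeat(1, n_heads, 1, 1)
            context, attn = ScaledDotProductAttention()(q_s, k_s, v_s, attn_mask)
            context = context.transpose(1, 2).contiguous().view(batch_size, -1, n_heads * d_v)
            output = self.fc(context)  # reuse the module created in __init__
            return self.layer_norm(output + residual), attn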

mxsurui commented 2 years ago

I have the same question. Maybe it is not just a question, it's an actual problem.
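
A quick way to see why it is a problem (a sketch assuming toy hyperparameter values and the class exactly as quoted above): the nn.Linear and nn.LayerNorm created inside forward are never registered as sub-modules, so their parameters never reach the optimizer and are re-initialized with random weights on every call.

    import torch.nn as nn

    d_model, d_k, d_v, n_heads = 512, 64, 64, 8  # toy values for illustration

    model = MultiHeadAttention()  # the class as quoted above
    # Only W_Q, W_K and W_V show up; the output projection and the LayerNorm
    # built inside forward contribute nothing to model.parameters().
    print(sum(p.numel() for p in model.parameters()))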