rabbicat30 opened this issue 1 year ago (status: Open)
I can't understand why the attention_mask is this shape. Can you give me an answer or some references? I would be very grateful for your help!

You should look at the original Transformer paper and other blog posts (e.g., The Illustrated Transformer is great) for more information. The reason is that in self-attention we perform attention on a tensor with itself, so every position attends to every position of the same sequence, hence the square shape.

I understand it now. Thanks very much!
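For anyone else landing here with the same question, here is a minimal sketch (assuming a single self-attention head in plain PyTorch; the variable names are illustrative and not this repo's actual code) showing why the attention scores, and therefore any mask applied to them, come out square:

```python
import torch

# One sequence of 5 tokens with 8-dimensional embeddings.
seq_len, d_model = 5, 8
x = torch.randn(seq_len, d_model)

# In SELF-attention, queries and keys are projections of the SAME tensor x.
W_q = torch.randn(d_model, d_model)
W_k = torch.randn(d_model, d_model)
q = x @ W_q                                 # (seq_len, d_model)
k = x @ W_k                                 # (seq_len, d_model)

# Every position attends to every position of the same sequence,
# so the score matrix is (seq_len, seq_len) -- square.
scores = q @ k.T / d_model ** 0.5           # (5, 5)

# Any attention mask must match that shape, e.g. a causal (lower-triangular) mask.
causal_mask = torch.tril(torch.ones(seq_len, seq_len)).bool()
scores = scores.masked_fill(~causal_mask, float("-inf"))
attn = torch.softmax(scores, dim=-1)        # still (5, 5)

print(scores.shape, attn.shape)             # torch.Size([5, 5]) torch.Size([5, 5])
```

In cross-attention, by contrast, queries come from one sequence and keys from another, so the score matrix (and its mask) would be rectangular, (query_len, key_len).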