gaopengcuhk / Stable-Pix2Seq

A full-fledged version of Pix2Seq
Apache License 2.0

details of transformer code #9

Open lumiaomiao opened 2 years ago

lumiaomiao commented 2 years ago

Thank you for your work. I have a question about the sequence embedding. The screenshots below are from transformer.py. When the sequence embedding is built, the position embedding has already been added to it, as follows:

[screenshot: sequence embedding construction in transformer.py]

Why do you input the same position embedding into the decoder layer again? After this operation, the position embedding is added to the sequence embedding twice.

[screenshot: position embedding passed into the decoder layer]
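For readers without the screenshots, the pattern being asked about presumably looks like the DETR-style decoder layer below. This is a minimal sketch, assuming Stable-Pix2Seq follows the upstream DETR convention (the names `with_pos_embed` and `query_pos` come from the DETR codebase; the actual transformer.py may differ):

```python
import torch
import torch.nn as nn

class DecoderLayerSketch(nn.Module):
    """Minimal DETR-style decoder self-attention, for illustration only."""

    def __init__(self, d_model=256, nhead=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead)

    @staticmethod
    def with_pos_embed(tensor, pos):
        # DETR convention: positions are added to queries and keys only,
        # not to the values.
        return tensor if pos is None else tensor + pos

    def forward(self, tgt, query_pos):
        # Even if `tgt` already carries positional information from the
        # embedding step, `query_pos` is re-added to q and k here, which
        # is the double addition the question points out.
        q = k = self.with_pos_embed(tgt, query_pos)
        tgt2 = self.self_attn(q, k, value=tgt)[0]
        return tgt + tgt2
```

In this sketch the position embedding would indeed enter twice: once when the sequence embedding is constructed, and once more inside every decoder layer via `query_pos`.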