Description:
In the SINE paper's attention weighting step, trainable positional embeddings are added to the input embeddings so that the model can use item position when computing P_{t|k}. However, in RecBole's SINE attention weighting implementation, the original input embeddings are still used for this calculation, without the positional embeddings. Here is the detail in the paper for reference: