throw-away-qq opened 2 months ago
@zhengbw0324 Hello, the position encoding of Transformer-based models (e.g., LightSANs) indicates the order of interactions rather than their absolute time, the same as in the classic model SASRec. In fact, using timestamps requires some pre-processing, because timestamps are continuous values and cannot be used directly for position encoding. You can refer to TiSASRec (Time Interval Aware Self-Attention for Sequential Recommendation), which applies interaction time intervals.
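For reference, here is a minimal sketch of the kind of pre-processing TiSASRec describes (the names `relative_time_intervals` and `TimeIntervalEmbedding` are illustrative, not RecBole API): pairwise timestamp differences are clipped to a maximum so they become discrete indices into a learnable embedding table. TiSASRec additionally scales each user's intervals by their minimum interval before clipping; that step is omitted here for brevity.

```python
import torch
import torch.nn as nn

def relative_time_intervals(timestamps: torch.Tensor, max_interval: int = 512) -> torch.Tensor:
    """timestamps: [batch, seq_len] integer (e.g., unix) timestamps.
    Returns clipped pairwise intervals of shape [batch, seq_len, seq_len]."""
    # pairwise |t_i - t_j| for every pair of positions in the sequence
    diff = (timestamps.unsqueeze(-1) - timestamps.unsqueeze(-2)).abs()
    # clip to a maximum interval so the embedding table stays finite
    return diff.clamp(max=max_interval)

class TimeIntervalEmbedding(nn.Module):
    """One learnable vector per discrete interval value 0..max_interval."""
    def __init__(self, max_interval: int = 512, hidden_size: int = 64):
        super().__init__()
        self.emb = nn.Embedding(max_interval + 1, hidden_size)
        self.max_interval = max_interval

    def forward(self, timestamps: torch.Tensor) -> torch.Tensor:
        return self.emb(relative_time_intervals(timestamps, self.max_interval))

# usage: all gaps larger than 512 collapse into the last bucket
ts = torch.tensor([[0, 30, 30, 500, 9000]])
rel_emb = TimeIntervalEmbedding()(ts)  # shape [1, 5, 5, 64]
```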
According to the LightSANs paper, there needs to be a mechanism that encodes both items and positions (via item embeddings and position embeddings), but in the lightsans.py file the positional encoding is learned only from a torch.arange-style index tensor.
Shouldn't there be a mechanism that makes better use of the positional (timestamp-related) features within LightSANs, instead of just passing a torch.arange-style tensor?
What if the positional-encoding part of the model encoded the timestamp information directly, instead of taking torch.arange indices for positional encodings, so that it could better capture the gaps between items for each user? A sketch follows the link below.
https://github.com/RUCAIBox/RecBole/blob/2b6e209372a1a666fe7207e6c2a96c7c3d49b427/recbole/model/sequential_recommender/lightsans.py#L85
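For context, the linked line builds position ids with `torch.arange`, so the embedding can only reflect order (paraphrased below, not copied verbatim from RecBole). After it is a hedged sketch of what the proposal above might look like, with a hypothetical `timestamp_position_ids` helper that bucketizes log-scaled time gaps; the bucketing scheme is an assumption for illustration only.

```python
import torch
import torch.nn as nn

# Paraphrase of the index-based pattern at the linked line:
# position ids are just 0..seq_len-1, so the embedding sees order only.
item_seq = torch.randint(1, 100, (2, 8))                      # [batch, seq_len]
position_ids = torch.arange(item_seq.size(1), dtype=torch.long)
position_ids = position_ids.unsqueeze(0).expand_as(item_seq)
pos_emb = nn.Embedding(8, 64)(position_ids)                   # no time information

# Hypothetical alternative raised in this issue (illustrative names,
# not RecBole API): bucketize log-scaled gaps since each user's first
# interaction and embed the bucket index instead of the arange index.
def timestamp_position_ids(timestamps: torch.Tensor, n_buckets: int = 128) -> torch.Tensor:
    gaps = (timestamps - timestamps[:, :1]).clamp(min=0).float()
    scaled = torch.log1p(gaps)                                # compress long gaps
    scaled = scaled / (scaled.max() + 1e-8) * (n_buckets - 1)
    return scaled.long()                                      # [batch, seq_len]

ts = torch.tensor([[0, 60, 3600, 86400, 90000, 90060, 180000, 200000]])
time_emb = nn.Embedding(128, 64)(timestamp_position_ids(ts))  # time-aware "positions"
```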