DarriusL / CoCheLab

Code for the content caching algorithm in edge caching.
GNU General Public License v3.0

replicating the results #4

Open majidhosseini87 opened 10 months ago

majidhosseini87 commented 10 months ago

Hi, thank you for your amazing efforts. I've been trying to replicate the results of the paper titled "PSAC: Proactive Sequence-Aware Content Caching via Deep Learning at the Network Edge" using your code. Unfortunately, I am facing challenges in achieving the results described in the paper. Specifically, the results from the Psac_gen framework in your code significantly differ from the QoE score reported in the paper. Could you provide any guidance or updates that might assist in accurately replicating the results?

Sincerely,

DarriusL commented 10 months ago

Hi, PSAC_gen in CoCheLab is mainly used for comparison with contrastive learning models (CL4SRec, etc.), so PSAC_gen here is a modified version:

  1. The user request sequence in the original paper was not clipped or padded to a fixed length. In order to unify it with the other models, PSAC_gen here is also trained with fixed-length sequences.
  2. The QoE in the original paper is calculated as follows:

     image

     where \theta is the average length of the sequence. Due to the former reason, if the original formula were used, the QoE would always be 0 (fixed-length sequences). Therefore, I made some modifications to the original formula:

     image

     where \theta is now the user satisfaction rate, a hyperparameter.
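Point 1 above can be sketched as follows. This is a hypothetical illustration, not the repo's actual preprocessing: the function name, the choice to keep the most recent requests when clipping, and the left-padding with a pad token are all assumptions.

```python
def to_fixed_length(seq, max_len, pad_id=0):
    """Clip a request sequence to its last max_len items,
    or left-pad it with pad_id up to max_len."""
    if len(seq) >= max_len:
        return seq[-max_len:]                      # keep the most recent requests
    return [pad_id] * (max_len - len(seq)) + seq   # pad short sequences

# Every user sequence now has the same length, so it can be batched
# together with the other models' (CL4SRec, etc.) inputs.
long_seq = to_fixed_length([1, 2, 3, 4, 5], max_len=3)   # clipped
short_seq = to_fixed_length([1, 2], max_len=4)           # padded
```

With all sequences at a fixed length, a QoE term based on the average sequence length becomes a constant, which is why the modified formula swaps \theta for a satisfaction-rate hyperparameter.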

Farzad-Mehrabi commented 10 months ago

hey, in the PSAC framework's forward pass there is a tensor called su. Could you define exactly what it is, and what slide_len and L represent?

    def forward(self, su):
        #su: (batch_size, slide_len, L)
        #Ec: (batch_size, slide_len, L, d)
        Ec = self.encoder(su);
        #Eu: (batch_size, 1, d)
        Eu = self.encoder(su.reshape(su.shape[0], -1)).mean(dim = 1).unsqueeze(1);
        #o: (batch_size, slide_len, 1, n)
        o = self.VrtConv(Ec.transpose(0,1)).transpose(0,1).transpose(-1, -2);
        #attn: (batch_size, slide_len, 1, L*d)
        attn = self.self_attn(Ec);
        #pro_logits: (batch_size, slide_len, req_set_len)
        pro_logits = self.LSTFcNet(o, attn, Eu);
        return pro_logits
DarriusL commented 10 months ago

> hey, in PSAC Framework for the forward pass there is tensor called su, could you define exactly what it is and what slide_len and L represent?

Sure. I usually use su to represent the user sequence. The input to PSAC_gen ([batch, slide_len, L]) is the original user sequence ([batch, n]) processed by a sliding window of length L, where slide_len is the number of windows.