QingyaoAi / Deep-Listwise-Context-Model-for-Ranking-Refinement

A Tensorflow implementation of the Deep Listwise Context Model (DLCM) for ranking refinement.
Apache License 2.0

attention loss #1

Closed AdeDZY closed 5 years ago

AdeDZY commented 6 years ago

Hi Qingyao, congratulations on the nice work! I have a question about the attention loss function:

`loss = tf.nn.softmax_cross_entropy_with_logits(logits=output, labels=target)`

It looks different from the formula in the paper. Can you explain a little bit? Thanks!

QingyaoAi commented 6 years ago

Thanks!

Sorry for the confusion. The input (i.e., the "labels" argument) of this softmax function is actually an attention distribution computed from the exponentials of the graded relevance labels. It is computed on line 227 of RankLSTM_model.py. I will refactor the code to make it clearer once I have time.
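To illustrate the idea in the answer above, here is a minimal NumPy sketch of how graded relevance labels could be turned into the attention distribution that serves as the "labels" tensor. This is an assumption-laden reconstruction, not the repo's exact line-227 code: the function names are hypothetical, and whether non-relevant items (label 0) are masked out before normalization is a guess.

```python
import numpy as np

def attention_labels(relevance, positive_only=True):
    """Turn graded relevance labels into an attention distribution.

    Sketch: take exponentials of the graded relevance labels and
    normalize them to sum to 1. Masking out non-positive labels
    (positive_only=True) is an assumption, not confirmed by the repo.
    """
    relevance = np.asarray(relevance, dtype=np.float64)
    weights = np.exp(relevance)
    if positive_only:
        weights = np.where(relevance > 0, weights, 0.0)  # assumed masking
    total = weights.sum()
    if total == 0.0:
        # No relevant items: fall back to a uniform distribution.
        return np.full_like(weights, 1.0 / len(weights))
    return weights / total

def attention_loss(logits, relevance):
    """Softmax cross-entropy between model scores and the attention labels,
    mirroring tf.nn.softmax_cross_entropy_with_logits(logits, labels)."""
    target = attention_labels(relevance)
    # Numerically stable log-softmax of the model's output scores.
    shifted = logits - np.max(logits)
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))
    return -np.sum(target * log_probs)
```

For example, with graded labels `[2, 0, 1]`, the distribution concentrates on the label-2 item (weight `exp(2)`), gives the label-1 item weight `exp(1)`, and (under the masking assumption) zero to the label-0 item.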

AdeDZY commented 6 years ago

AWESOME! THANKS!
