Neph0s / LMKE

Code for the paper 'Language Models as Knowledge Embeddings'

Hello, I suspect that in your code, during training and prediction, the degrees of the predicted tail entities are leaked. #3

Closed LiOHx closed 1 year ago

LiOHx commented 1 year ago

From the code below, when scoring a candidate, the degree of the candidate (matching) entity should be fed in, but the degree of the predicted (gold) entity is fed in instead:

```python
sim[it] = self.sim_classifier(torch.cat([target_pred, target_encoded, target_pred - target_encoded, target_pred * target_encoded, deg_feature], dim=-1)).T
```

Printing `deg_feature` shows the same value repeated for every candidate row:

```
print(deg_feature)
tensor([[3.2581, 4.0943],
        [3.2581, 4.0943],
        [3.2581, 4.0943],
        ...,
        [3.2581, 4.0943],
        [3.2581, 4.0943]], device='cuda:1')
```

I don't know if my understanding is correct; please correct me if I am wrong. Thank you.
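If I read the printout correctly, the concern can be illustrated with a small sketch (the names `candidate_degrees`, `target_degree`, and `leaked` are hypothetical, not from the repo): when ranking all candidates for one query, the degree feature concatenated to each candidate's row should be that candidate's own degree, whereas a feature that is constant across rows can only carry information about the gold entity.

```python
import numpy as np

n_candidates = 4

# Hypothetical per-candidate degree features (what the classifier
# arguably should see: each row is that candidate's own degree).
candidate_degrees = np.array([[0.0, 0.7],
                              [1.1, 0.0],
                              [2.1, 1.4],
                              [3.3, 4.1]])

# What the printed tensor suggests is happening instead: the gold
# (predicted) entity's degree is broadcast to every candidate row.
target_degree = np.array([3.2581, 4.0943])
leaked = np.tile(target_degree, (n_candidates, 1))

# The leaked feature is identical for every candidate, so it tells the
# classifier about the answer entity rather than the candidate being scored.
print(np.all(leaked == leaked[0]))                        # constant across rows
print(np.all(candidate_degrees == candidate_degrees[0]))  # varies per row
```

The check distinguishes the two cases: a legitimate candidate-side feature varies across rows, while the leaked one is constant for the whole query.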