intfloat / SimKGC

ACL 2022, SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models

Code Problem #10

Closed qiushuigongchang closed 2 years ago

qiushuigongchang commented 2 years ago

Hi, I saw the `finetune_t` attribute in model.py (line 31: `self.log_inv_t = torch.nn.Parameter(torch.tensor(1.0 / args.t).log(), requires_grad=args.finetune_t)`), but I can only find a `finetune-t` command-line argument. Are the two the same parameter?

intfloat commented 2 years ago

Yes, they are the same.
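For context, this is standard `argparse` behavior: dashes in a flag name become underscores in the parsed attribute. A minimal sketch (the default value of `--t` here is illustrative, not necessarily the repo's default):

```python
import argparse

parser = argparse.ArgumentParser()
# The command-line flag uses a dash ...
parser.add_argument('--finetune-t', action='store_true')
parser.add_argument('--t', default=0.05, type=float)

args = parser.parse_args(['--finetune-t'])
# ... but argparse replaces dashes with underscores in the attribute name,
# so the flag --finetune-t is read back as args.finetune_t.
print(args.finetune_t)  # prints True
```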

qiushuigongchang commented 2 years ago

Thanks a lot!

qiushuigongchang commented 2 years ago

Hi, I'm a little confused. What does the `triplet_mask` do in models.py (line 96)?

intfloat commented 2 years ago

It is used to mask out false negatives during training.

For example, given two triples in a batch, (obama, instance of, politician) and (biden, is, US president), default in-batch negatives treat (obama, instance of, politician) as a positive and (obama, instance of, US president) as a negative. But the latter is in fact a correct triple, so we need to mask it out when computing the loss.
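The idea above can be sketched as follows. This is a minimal, hypothetical illustration of building such a mask, not the repo's exact implementation; `build_triplet_mask`, its arguments, and the toy data are all made up for this example:

```python
import torch

def build_triplet_mask(batch_triples, all_true_triples):
    """Mask out in-batch negatives that are actually correct triples.

    batch_triples: list of (head, relation, tail) in the current batch.
    all_true_triples: set of all known-correct (head, relation, tail).
    Returns a bool matrix where mask[i][j] == True means the pair
    (head_i, relation_i, tail_j) is a valid negative; the diagonal
    (the true positives) and any false negatives are set to False.
    """
    n = len(batch_triples)
    mask = torch.ones(n, n, dtype=torch.bool)
    for i, (h, r, _) in enumerate(batch_triples):
        for j, (_, _, t) in enumerate(batch_triples):
            # Exclude the positive itself (i == j) and any in-batch
            # combination that happens to be a known-correct triple.
            if i == j or (h, r, t) in all_true_triples:
                mask[i, j] = False
    return mask

# The example from the comment above:
batch = [("obama", "instance of", "politician"),
         ("biden", "is", "US president")]
known = {("obama", "instance of", "politician"),
         ("biden", "is", "US president"),
         ("obama", "instance of", "US president")}  # the false negative
mask = build_triplet_mask(batch, known)
```

Here `mask[0][1]` ends up `False`, because (obama, instance of, US president) is a correct triple and must not be penalized as a negative.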

qiushuigongchang commented 2 years ago

Is this about the closed-world assumption (CWA) versus the open-world assumption (OWA)?

qiushuigongchang commented 2 years ago

Hi, I'm a little confused again. I understand the head + relation -> tail direction when computing the loss, but what does tail -> head + relation mean in trainer.py (line 161)? Your help is much appreciated.

intfloat commented 2 years ago

This simply computes the contrastive loss in two directions.

head + relation -> tail says: given a head entity and relation, which tail entity is correct?

tail -> head + relation says: given a tail entity, which head entity + relation is correct?

You can refer to *A Simple Framework for Contrastive Learning of Visual Representations* (SimCLR) for perhaps an easier understanding.
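The two directions can be sketched as a symmetric InfoNCE loss over in-batch embeddings. This is a hedged illustration with random tensors, not the repo's actual trainer code; the function name, the temperature value, and the averaging of the two losses are all assumptions for this sketch:

```python
import torch
import torch.nn.functional as F

def bidirectional_contrastive_loss(hr_emb, tail_emb, inv_t=20.0):
    """InfoNCE loss computed in both directions over a batch.

    hr_emb:   (batch, dim) embeddings of head + relation.
    tail_emb: (batch, dim) embeddings of tail entities.
    Row i of each tensor comes from the same triple, so the correct
    match for row i is column i.
    """
    hr_emb = F.normalize(hr_emb, dim=1)
    tail_emb = F.normalize(tail_emb, dim=1)
    # (batch, batch) cosine similarities, scaled by inverse temperature.
    logits = hr_emb @ tail_emb.t() * inv_t
    labels = torch.arange(logits.size(0))
    # head + relation -> tail: each row picks its correct tail column.
    loss_forward = F.cross_entropy(logits, labels)
    # tail -> head + relation: each column picks its correct row.
    loss_backward = F.cross_entropy(logits.t(), labels)
    return (loss_forward + loss_backward) / 2

# Toy usage with random embeddings:
torch.manual_seed(0)
hr = torch.randn(4, 8)
tails = torch.randn(4, 8)
loss = bidirectional_contrastive_loss(hr, tails)
```

Transposing the logits matrix is what flips the direction: the same similarity scores are reused, but the roles of query and candidate are swapped.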

qiushuigongchang commented 2 years ago

Thank you for helping me!