Hi, I saw the finetune_t parameter in model.py (self.log_inv_t = torch.nn.Parameter(torch.tensor(1.0 / args.t).log(), requires_grad=args.finetune_t), line 31), but I can only find a finetune-t argument. Are the two the same?

Yes, they are the same.
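They match because argparse converts hyphens in option names to underscores on the parsed namespace, so a --finetune-t flag is read back as args.finetune_t. A minimal standalone sketch (the default value for --t below is illustrative, not necessarily the repo's):

```python
import argparse
import torch

parser = argparse.ArgumentParser()
# The command-line flag uses a hyphen...
parser.add_argument('--finetune-t', action='store_true')
parser.add_argument('--t', type=float, default=0.05)  # illustrative default
args = parser.parse_args(['--finetune-t'])

# ...but argparse stores it with an underscore: args.finetune_t.
log_inv_t = torch.nn.Parameter(
    torch.tensor(1.0 / args.t).log(),  # log of the inverse temperature 1/t
    requires_grad=args.finetune_t,     # trained only when --finetune-t is set
)
print(args.finetune_t, log_inv_t.item())
```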
Thanks a lot!
Hi, I'm a little confused. What does the triplet_mask do in models.py (line 96)?
It is used to mask out false negatives during training.
For example, given two triples in a batch, (obama, instance of, politician) and (biden, is, US president), in-batch negatives will by default treat (obama, instance of, politician) as a positive and (obama, instance of, US president) as a negative. But the latter is in fact a correct triple, so we need to mask it out for the loss computation.
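Conceptually, the mask can be built like this. This is a hypothetical sketch, not the repo's exact code; the function name, known_triples set, and the nested loop are all illustrative:

```python
import torch

def build_triplet_mask(heads, relations, tails, known_triples):
    """mask[i][j] is True if tail j may be scored against query
    i = (head_i, relation_i); False marks a false negative."""
    n = len(heads)
    mask = torch.ones(n, n, dtype=torch.bool)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # the diagonal is the positive pair, always kept
            # If (head_i, relation_i, tail_j) is itself a known correct
            # triple, it is a false negative for query i: mask it out.
            if (heads[i], relations[i], tails[j]) in known_triples:
                mask[i, j] = False
    return mask

# Example with the triples from the comment above:
known = {('obama', 'instance of', 'politician'),
         ('biden', 'is', 'US president'),
         ('obama', 'instance of', 'US president')}
mask = build_triplet_mask(['obama', 'biden'],
                          ['instance of', 'is'],
                          ['politician', 'US president'],
                          known)
# mask[0, 1] is False: (obama, instance of, US president) is a correct
# triple, so "US president" must not count as a negative for row 0.
```

The masked positions would then be pushed out of the softmax before computing the loss, e.g. with logits.masked_fill_(~mask, -1e4).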
Is this about the closed-world assumption (CWA) versus the open-world assumption (OWA)?
Hi, I'm a little confused. I understand head + relation -> tail when calculating the loss function, but what does tail -> head + relation mean in trainer.py (line 161)? Your help would be very much appreciated.
This is simply computing the contrastive loss from two directions. head + relation -> tail says: given a head entity and a relation, which tail entity is correct? tail -> head + relation says: given a tail entity, which head entity + relation pair is correct?
You can refer to "A Simple Framework for Contrastive Learning of Visual Representations" (the SimCLR paper) for a perhaps easier-to-follow explanation.
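The two directions amount to a symmetric InfoNCE loss. A minimal sketch of the idea, not the repo's exact trainer code; hr_emb, tail_emb, and inv_t are illustrative names, and both embedding matrices are assumed L2-normalized:

```python
import torch
import torch.nn.functional as F

def bidirectional_contrastive_loss(hr_emb, tail_emb, inv_t):
    """hr_emb[i] encodes head_i + relation_i, tail_emb[i] encodes
    tail_i, so row i of each forms the positive pair."""
    logits = hr_emb @ tail_emb.t() * inv_t          # cosine similarities / t
    labels = torch.arange(logits.size(0), device=logits.device)
    # head + relation -> tail: row i should pick tail i.
    loss_forward = F.cross_entropy(logits, labels)
    # tail -> head + relation: column j should pick head + relation j.
    loss_backward = F.cross_entropy(logits.t(), labels)
    return (loss_forward + loss_backward) / 2
```

Averaging the two directions weights both prediction tasks equally, much like the symmetric loss in SimCLR.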
Thank you for helping me.