I noticed that the implementation of ComplEx and the other methods differs slightly from the original papers. For example, the code applies BatchNorm and dropout; do the original algorithms also use BatchNorm and dropout?
head = self.bn0(head)
head = self.ent_dropout(head)
relation = self.rel_dropout(relation)
Yes, we used the code made available by the TuckER authors. However, this isn't necessary; you can also use libKGE to train the embeddings. Just make sure not to use batch norm in that case.