It has been reported in the literature that for common-sense knowledge graphs, initializing entity embeddings as the average of their constituent word embeddings leads to faster convergence and better results. Have you tried this, and do you provide functionality for it? I can implement it for my own use case, but I wanted to know whether your work already handles this.
Thanks for making the work public!
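For concreteness, here is a minimal sketch of the initialization I have in mind, assuming pre-trained word vectors are available as a dict mapping tokens to NumPy arrays (all names, shapes, and the fallback scheme here are illustrative, not part of your codebase):

```python
import numpy as np

def init_entity_embeddings(entity_names, word_vectors, dim, seed=0):
    """Initialize each entity embedding as the average of the word
    embeddings of the tokens in its surface name. Entities with no
    covered token fall back to a small random vector (illustrative choice)."""
    rng = np.random.default_rng(seed)
    emb = np.empty((len(entity_names), dim), dtype=np.float32)
    for i, name in enumerate(entity_names):
        vecs = [word_vectors[t] for t in name.lower().split() if t in word_vectors]
        if vecs:
            emb[i] = np.mean(vecs, axis=0)
        else:
            emb[i] = rng.normal(scale=0.1, size=dim)
    return emb
```

The resulting matrix would then be used to initialize the entity embedding table before training, instead of a purely random initialization.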