Closed ProQianXiao closed 5 years ago
I'm not sure I understand your question. How is this different from encoding e and r as vectors?
Sorry, maybe I didn't describe my question clearly. My understanding is (taking the relation encoding as an example): you randomly initialize a relation embedding matrix r,
and when you encode a relation (such as "LocatedIn" in the countries dataset), you look up its id in the "relation_vocab.json" file; assume its id is 2. Then tf.nn.embedding_lookup(r, 2)
returns the third row of the matrix, and that row is the encoding of the relation "LocatedIn".
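This lookup step can be sketched in plain NumPy; the matrix shape, the random seed, and the vocabulary entry below are made up for illustration, and row indexing stands in for what `tf.nn.embedding_lookup` does on a TensorFlow variable:

```python
import numpy as np

# Hypothetical sketch: a randomly initialized relation embedding matrix,
# analogous to the trainable variable r in the repo.
num_relations, embedding_dim = 5, 4
rng = np.random.default_rng(0)
r = rng.standard_normal((num_relations, embedding_dim))

# Assumed vocabulary entry, standing in for relation_vocab.json.
relation_vocab = {"LocatedIn": 2}
rel_id = relation_vocab["LocatedIn"]

# embedding_lookup(r, 2) returns row index 2, i.e. the third row of r.
located_in_vec = r[rel_id]
assert located_in_vec.shape == (embedding_dim,)
assert np.array_equal(located_in_vec, r[2])
```

So the "encoding" of a relation is just the row of r selected by its vocabulary id.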
When I said "you don't encode them as vectors", I meant that you don't encode them with NLP, i.e. without taking their semantic information into account.
Is that right?
If by NLP you mean we didn't use sentences like "Boston is located in the US" to encode the vector for LocatedIn, then you're correct. We initialized the relation embeddings randomly and trained the embedding parameters with the RL objective. I hope that helps! :)
Thanks very much, that helps.
Hello, I read your paper and code, and there is one point I'm confused about.
Regarding the encoding of relations and entities, here is my understanding. You don't encode them as vectors; instead, you use embedding matrices r and e and look up rows in those matrices by id. Because gradients flow back through tf.nn.embedding_lookup(), the parameters in the embedding matrices are trainable.
Is that right?
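The trainability part of the question above can be sketched with a manual gradient step in NumPy; the shapes, seed, learning rate, and squared-error loss are made-up illustration, not the repo's actual RL objective. The point is that the gradient only touches the row that was looked up, which mirrors the sparse updates TensorFlow applies to an embedding matrix behind tf.nn.embedding_lookup:

```python
import numpy as np

# Hypothetical embedding matrix and a target vector for a toy loss.
rng = np.random.default_rng(1)
r = rng.standard_normal((5, 4))
rel_id = 2
target = np.ones(4)
lr = 0.1

vec = r[rel_id]                 # "lookup" = row indexing
grad = 2.0 * (vec - target)     # gradient of ||vec - target||^2 w.r.t. vec
r[rel_id] = vec - lr * grad     # only the selected row is updated
```

After the step, the looked-up row has moved toward the target while every other row of r is untouched; in the real code the RL objective plays the role of the toy loss.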