Could you please explain how relations are handled in the code when they are passed to the BertTokenizer? It looks like there is no 'relations2description' dataset, which suggests that relations are just the raw strings in the middle column of train.txt/valid.txt/test.txt: unlike entities, they are not converted into descriptions but are used directly. Is that correct? If so, will the BERT model still produce embeddings with rich semantic information, comparable to what it outputs when entities are expanded into descriptions?
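To make sure I'm reading the code correctly, here is a minimal sketch of what I believe happens (the example relation string, the model name, and the mean-pooling step are my own assumptions for illustration, not code from your repo):

```python
import torch
from transformers import BertTokenizer, BertModel

# My understanding: the raw relation text from the middle column of
# train.txt is fed to the tokenizer as-is, with no relation2description
# lookup. (The specifics below are assumptions, not the repo's code.)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

relation = "place of birth"  # raw string straight from train.txt
inputs = tokenizer(relation, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single relation vector.
relation_embedding = outputs.last_hidden_state.mean(dim=1)
print(relation_embedding.shape)  # torch.Size([1, 768])
```

Is this roughly what the pipeline does for relations?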
Additionally, how do you handle inverse relations? Is it as simple as prepending 'inverse' to every relation string, and does that give the model a semantically correct inverse relation? I couldn't find a file like 'inverse_relation2description' either, so I wanted to check my understanding against the sketch below.
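If I had to guess, the inverse direction is produced by something like the following (the helper name and the exact prefixing rule are hypothetical guesses on my part):

```python
def build_relation_text(relation: str, inverse: bool) -> str:
    """Hypothetical helper (name and rule are my guesses): represent the
    inverse direction textually by prepending the word 'inverse' to the
    raw relation string, so BERT sees e.g. 'inverse place of birth'."""
    return "inverse " + relation if inverse else relation

print(build_relation_text("place of birth", inverse=False))  # place of birth
print(build_relation_text("place of birth", inverse=True))   # inverse place of birth
```

Thank you for taking the time to help me with this.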