Dear author:
Thank you for sharing the code and for presenting the paper at KDD 2022; it has greatly helped my research. I am very interested in this paper and have tried my best to run the codebase on the three few-shot entity typing datasets, but I encountered two problems.
First, the input data requires the files "new_hier" and "old_hier", which are not provided in the codebase. It seems that "new_hier" encodes the type hierarchy, while "old_hier" maps each entity type class to its description words. The first word is important: it is used as the class token for the MLM and prompt-tuning objectives, and also in the reverse_dictionary during the loss_f1 evaluation. Could you provide these files for the three datasets?
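For reference, here is the format I am currently assuming for the two files and how I would parse them. The tab-separated layout and the function/variable names below are purely my own guess, not something from the repo, so please correct me if the real format differs:

```python
# My current guess at the file formats (an assumption on my part):
#   new_hier:  one type per line, "child_type <TAB> parent_type"
#   old_hier:  one type per line, "type_label <TAB> word1 word2 ..." where the
#              first word is the MLM / prompt-tuning class token

def load_hier(new_hier_path, old_hier_path):
    """Parse the two hierarchy files under my assumed format."""
    parent_of = {}
    with open(new_hier_path) as f:
        for line in f:
            child, parent = line.strip().split('\t')
            parent_of[child] = parent

    type_words = {}
    reverse_dictionary = {}
    with open(old_hier_path) as f:
        for line in f:
            label, words = line.strip().split('\t')
            word_list = words.split()
            type_words[label] = word_list
            # first description word -> type label, as used in loss_f1
            reverse_dictionary[word_list[0]] = label
    return parent_of, type_words, reverse_dictionary
```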
Second, the paper introduces an exclusive loss and an inclusive loss to model the hierarchical relationships between classes, but these losses do not seem to be implemented in the current codebase. The only losses I found are the CrossEntropy loss on the entity-type token at https://github.com/teapot123/Fine-Grained-Entity-Typing/blob/main/model.py#L171 and the distillation loss at https://github.com/teapot123/Fine-Grained-Entity-Typing/blob/main/run.py#L77. Could you provide the code that adds these two losses?
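To make this question concrete, here is a rough sketch of what I currently imagine the two losses compute. Everything here is my own assumption based on my reading, not the paper's exact formulation: the tensor shapes, the `parent_idx`/`sibling_mask` inputs, and the hinge/product forms are all guesses.

```python
import torch
import torch.nn.functional as F

def hierarchy_losses(logits, parent_idx, sibling_mask, margin=0.0):
    """
    Sketch of my guess at the inclusive/exclusive losses.

    logits:       (batch, num_types) type scores
    parent_idx:   (num_types,) long tensor, index of each type's parent
                  (-1 for root types)
    sibling_mask: (num_types, num_types) float tensor, 1.0 where two types
                  share a parent (i.e., are mutually exclusive siblings)
    """
    probs = torch.sigmoid(logits)

    # Inclusive loss: a parent type should score at least as high as any
    # of its children, so penalize p(child) - p(parent) when positive.
    has_parent = parent_idx >= 0
    child_p = probs[:, has_parent]
    parent_p = probs[:, parent_idx[has_parent]]
    inclusive = F.relu(child_p - parent_p + margin).mean()

    # Exclusive loss: sibling types should not both receive high
    # probability, so penalize the product of sibling probabilities.
    pairwise = probs.unsqueeze(2) * probs.unsqueeze(1)  # (batch, T, T)
    exclusive = (pairwise * sibling_mask).sum() / (
        sibling_mask.sum() * probs.size(0) + 1e-8)

    return inclusive, exclusive
```

If this guess is roughly right, I would combine them with the existing objectives as something like `loss = ce_loss + lambda1 * inclusive + lambda2 * exclusive`, but I would really appreciate a pointer to the actual implementation.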
Thank you very much for your time on this issue; any feedback or response will be greatly appreciated.