Hello, that's a good question. The models we release are trained on multiple EE datasets. When training on different datasets, we add a prefix to represent the schema of the data. For example, we use "
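For illustration only (this is not the actual OmniEvent training code, and the dataset tags below are placeholders borrowed from the example later in this thread), prefixing each training example with its schema could look like this:

```python
# Rough sketch: a schema prefix marks which dataset/ontology an example comes
# from, so one seq2seq model can be trained on several EE datasets at once.
# The "maven"/"ace2005" tags are only illustrative.
def add_schema_prefix(text: str, schema: str) -> str:
    """Prepend a schema tag identifying the source dataset."""
    return f"<{schema}>{text}"

training_examples = [
    ("maven", "The king married the queen."),
    ("ace2005", "A bomb exploded near the embassy."),
]
for schema, sentence in training_examples:
    print(add_schema_prefix(sentence, schema))
# <maven>The king married the queen.
# <ace2005>A bomb exploded near the embassy.
```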
Thank you for your answer.
I'm still unclear on these prefixes, though. It seems you did not add them as special tokens to the tokenizer. Did you consider that treating them like any other word was not a problem, or am I missing something?
For instance, in the following Google Colab notebook the author does add their prefix "<idf.lang>" as a special token to the tokenizer: https://colab.research.google.com/github/KrishnanJothi/MT5_Language_identification_NLP/blob/main/MT5_fine-tuning.ipynb
Thanks for the question. We trained two versions of the model, one with the prefixes added as special tokens and one without, and there is no significant difference between their results. Previous work has reported a similar phenomenon (https://aclanthology.org/2022.aacl-short.21.pdf).
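To make the comparison concrete, here is a minimal sketch of the two set-ups, using a stock Hugging Face T5 tokenizer and model as stand-ins (not the released OmniEvent checkpoints; the "<maven>"/"<ace2005>" tags are illustrative):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

text = "<maven>The king married the queen"

# Set-up 1: leave the prefix as plain text; SentencePiece splits it into
# ordinary sub-word pieces and the model learns it like any other word.
plain_tokenizer = AutoTokenizer.from_pretrained("t5-base")
print(plain_tokenizer.tokenize(text))

# Set-up 2: register the prefixes as special tokens, so each one maps to a
# single reserved id, and resize the embedding matrix to cover the new ids.
special_tokenizer = AutoTokenizer.from_pretrained("t5-base")
special_tokenizer.add_tokens(["<maven>", "<ace2005>"], special_tokens=True)
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
model.resize_token_embeddings(len(special_tokenizer))
print(special_tokenizer.tokenize(text))  # the prefix now stays a single token
```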
Hello,
Thank you for this great package!
I would like to know on which datasets, and how, the two models used when running `OmniEvent.infer` were fine-tuned. That is, the two models whose links are accessible in the `utils` module.

In particular, I noticed that there is a "schema" option in `OmniEvent.infer`. I took it as suggesting that the models were fine-tuned on all of the available schemas. Yet, when digging a bit further, I noticed that none of these schemas have been passed as special tokens to the tokenizer. So I am wondering how the model knows that we are referring to a specific task, i.e. the fine-tuning on a specific dataset, when each text is prepended with `f"<txt_schema>"`.

To be sure: when given `"<maven>The king married the queen"`, how does the model understand that I want it to focus on what it learned when being fine-tuned on the MAVEN dataset? I ran a test with only the `EDProcessor` class using the schema "maven", and indeed it treated the prefix as any other token.
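For what it's worth, the same behaviour can be reproduced with a stock Hugging Face tokenizer used as a stand-in (this is not the exact OmniEvent/`EDProcessor` pipeline, just a sketch of the check):

```python
# "<maven>" is neither a vocabulary entry nor a registered special token of a
# plain T5 tokenizer, so it gets broken into ordinary sub-word pieces.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")  # illustrative checkpoint
prefix = "<maven>"

print(prefix in tokenizer.get_vocab())                # False
print(prefix in tokenizer.additional_special_tokens)  # False
print(tokenizer.tokenize(prefix + "The king married the queen"))
```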
Thank you