THU-KEG / OmniEvent

A comprehensive, unified and modular event extraction toolkit.
https://omnievent.readthedocs.io/
MIT License
352 stars 33 forks

Information on the fine-tuning of the models used in OmniEvent.infer #50

Closed by archatelain 1 month ago

archatelain commented 10 months ago

Hello,

Thank you for this great package!

I would like to know on which datasets, and how, the two models used when running OmniEvent.infer were fine-tuned. That is, the two models whose links are accessible in the utils module.

In particular, I noticed that there is a "schema" option in OmniEvent.infer. I took it as suggesting that the models were fine-tuned on all the available schemas. Yet, when digging a bit further, I noticed that none of these schemas are passed as special_tokens to the tokenizer. So I'm wondering how the model knows that we are referring to a specific task, i.e. the fine-tuning on a specific dataset, when each text is prepended with f"<txt_schema>". To be sure: when given "<maven>The king married the queen", how does the model understand that I want it to focus on what it learned when being fine-tuned on the MAVEN dataset?

I ran a test with only the EDProcessor class, using the schema "maven", and indeed the prefix was treated like any other token.
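
For reference, my check boils down to something like the snippet below at the tokenizer level ("t5-base" is only a stand-in for the actual checkpoint that OmniEvent downloads, and the exact subword split is indicative):

```python
from transformers import AutoTokenizer

# "t5-base" is only a stand-in for the checkpoint OmniEvent actually downloads.
tokenizer = AutoTokenizer.from_pretrained("t5-base")

print(tokenizer.tokenize("<maven>The king married the queen."))
# The prefix is split into ordinary subword pieces (something like
# ['▁', '<', 'ma', 'ven', '>', ...]) rather than kept as one special token.
```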

Thank you

h-peng17 commented 9 months ago

Hello, that's a good question. The models we release are trained on multiple EE datasets. When training on a given dataset, we add a prefix that represents the schema of that data. For example, we use "<maven>" to represent the schema of the MAVEN dataset. However, due to limitations in data volume and model capacity, the released models sometimes struggle to follow human instructions (i.e., the schema prefix). We are currently researching how to better align the model for IE tasks so that it follows human instructions more reliably.
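
Conceptually, the preprocessing just prepends that marker to the source text, along these lines (an illustrative sketch, not the actual OmniEvent code; the set of datasets and prefixes shown is only an example):

```python
# Illustrative sketch of the schema-prefix idea, not the actual OmniEvent code.
SCHEMA_PREFIXES = {
    "maven": "<maven>",
    "duee": "<duee>",  # the exact set of datasets/prefixes here is illustrative
}

def add_schema_prefix(text: str, dataset: str) -> str:
    """Prepend the schema marker of the source dataset to the input text."""
    return SCHEMA_PREFIXES[dataset] + text

print(add_schema_prefix("The king married the queen.", "maven"))
# -> "<maven>The king married the queen."
```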

archatelain commented 9 months ago

Thank you for your answer.

I'm still unclear on these prefixes, though. It seems that you did not add them as special tokens to the tokenizer. Did you consider that treating them like any other word was not a problem, or am I missing something?

For instance, in the following Google Colab notebook, the author does add their prefix "<idf.lang>" as a special token to the tokenizer: https://colab.research.google.com/github/KrishnanJothi/MT5_Language_identification_NLP/blob/main/MT5_fine-tuning.ipynb
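
Concretely, what that notebook does (and what I expected here) is roughly the following, with "google/mt5-small" as an illustrative checkpoint rather than the one OmniEvent ships:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# "google/mt5-small" is just an illustrative checkpoint.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

# Register the prefix as a special token and grow the embedding matrix accordingly.
tokenizer.add_special_tokens({"additional_special_tokens": ["<maven>"]})
model.resize_token_embeddings(len(tokenizer))

print(tokenizer.tokenize("<maven>The king married the queen."))
# -> ['<maven>', ...]  the prefix is now kept as a single token
```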

h-peng17 commented 8 months ago

Thanks for the question. We trained two versions of the model: one with the prefixes added as special tokens and one without. There is no significant difference between the results of the two. Previous work has also revealed a similar phenomenon (https://aclanthology.org/2022.aacl-short.21.pdf).