Closed — jej127 closed this issue 4 months ago
Thank you for your interest in our work.
First, let me clarify the concept of an OOV (out-of-vocabulary) word: it is a word that falls outside the fixed vocabulary of a static embedding model. Because these models have a predetermined vocabulary after pre-training, any word not included in it is considered OOV, meaning it has no pre-trained embedding.
Contextual embedding models operate differently: they do not have a fixed word vocabulary. As mentioned in Section 4.3, we instead treat words that the tokenizer splits into multiple smaller pieces as OOV words, and we infer reasonable embeddings for them.
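To make the distinction concrete, here is a minimal illustrative sketch (not the paper's actual code) of the two OOV criteria described above. The vocabulary and the toy subword tokenizer are made-up stand-ins for a real static vocabulary and a WordPiece/BPE tokenizer:

```python
# Static embeddings: a fixed vocabulary; any word outside it is OOV.
static_vocab = {"the", "cat", "sat"}  # toy vocabulary (assumption, for illustration)

def is_oov_static(word, vocab):
    """OOV for static models: the word has no entry in the fixed vocabulary."""
    return word not in vocab

# Contextual embeddings: no fixed word vocabulary. Following the criterion
# above, a word is treated as OOV when the subword tokenizer splits it
# into more than one piece.
def is_oov_contextual(word, tokenize):
    """OOV for contextual models: the word is tokenized into multiple pieces."""
    return len(tokenize(word)) > 1

# Toy subword tokenizer standing in for a real WordPiece/BPE tokenizer.
def toy_tokenize(word):
    known = {"cat": ["cat"], "sat": ["sat"], "catnip": ["cat", "##nip"]}
    # Unseen words fall back to character-level pieces here, purely for the demo.
    return known.get(word, list(word))

print(is_oov_static("catnip", static_vocab))      # True: not in the fixed vocab
print(is_oov_contextual("catnip", toy_tokenize))  # True: split into two pieces
print(is_oov_contextual("cat", toy_tokenize))     # False: kept as a single piece
```

With a real model you would replace `static_vocab` with the embedding table's keys and `toy_tokenize` with the model's own tokenizer; the two definitions then coincide with the ones used in the paper's experiments.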
Thanks for the kind explanation. It is really helpful.
Hello, I have a question regarding how to identify OOV words in the experiments. Specifically, Table 3 in the paper contains columns labeled "OOV", where metrics on OOV words are reported. Could you please clarify how OOV words are defined in these experiments? Additionally, are the definitions consistent between the static embedding model and the contextual embedding model experiments? Thank you for your assistance.