Goal

We compute word embeddings for English words and for fra/tpi/meu words, and represent each semantic domain by averaging the embeddings of its English words, so that fra/tpi/meu words can be linked to semantic domains by embedding similarity. (This issue is not part of the proposal, so it is optional.)

Motivation: fewer false positives and fewer false negatives --> higher precision and recall --> F1 > 0.30
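A minimal sketch of the linking step, assuming the English and fra/tpi/meu vectors already live in a shared (multilingual) embedding space and are available as plain word-to-vector dictionaries. The function names, the toy domain label, and the similarity threshold are placeholders for illustration, not part of the actual pipeline.

```python
import numpy as np

def normalize(v):
    # Unit-normalize so the dot product equals cosine similarity.
    return v / (np.linalg.norm(v) + 1e-12)

def domain_embedding(english_words, en_vectors):
    # Represent a semantic domain as the mean of its English word vectors.
    vecs = [en_vectors[w] for w in english_words if w in en_vectors]
    return normalize(np.mean(vecs, axis=0))

def link_word_to_domains(word, tgt_vectors, domain_vecs, threshold=0.5):
    # Link a fra/tpi/meu word to all domains whose cosine similarity
    # exceeds the (hypothetical) threshold, best match first.
    if word not in tgt_vectors:
        return []
    v = normalize(tgt_vectors[word])
    scored = [(name, float(v @ dv)) for name, dv in domain_vecs.items()]
    return sorted([(n, s) for n, s in scored if s >= threshold],
                  key=lambda x: -x[1])

# Toy usage (vectors and the domain label "1.3 Water" are made up):
# en_vectors  = {"water": np.array([...]), "river": np.array([...])}
# tgt_vectors = {"wara": np.array([...])}  # tpi word for "water"
# domain_vecs = {"1.3 Water": domain_embedding(["water", "river"], en_vectors)}
# print(link_word_to_domains("wara", tgt_vectors, domain_vecs))
```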
Tasks
[x] clarify the goal
[ ] Does the aligner (e.g., a fine-tuned AWESoME model) already use word embeddings?
[ ] clarify how this differs from node embeddings when words are treated as nodes