Currently, the way to do this is to implement a `TokenIndexer` and a `TokenEmbedder`; this is how we handle ELMo (indexer and embedder). @nelson-liu has also been working on a `Contextualizer` abstraction, which might be simpler than what we did for ELMo, but that hasn't been merged into the main repo yet.
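Roughly, that means subclassing `TokenEmbedder` so AllenNLP can call into your model. Here is a minimal sketch; the class name, the registration key, and the wrapped model's interface are all assumptions for illustration, not an existing API:

```python
import torch
from allennlp.modules.token_embedders import TokenEmbedder

@TokenEmbedder.register("my-contextual-embedder")  # hypothetical registration key
class MyContextualEmbedder(TokenEmbedder):
    """Wraps a user-supplied PyTorch contextualizer so it can be used
    anywhere AllenNLP expects a TokenEmbedder (e.g. inside BiDAF's
    text_field_embedder)."""

    def __init__(self, contextualizer: torch.nn.Module, output_dim: int) -> None:
        super().__init__()
        self._contextualizer = contextualizer  # your pretrained model (assumption)
        self._output_dim = output_dim

    def get_output_dim(self) -> int:
        # Downstream modules size their input dimensions off this value.
        return self._output_dim

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # `inputs` is whatever tensor your matching TokenIndexer produces,
        # shape (batch_size, num_tokens, ...).  Return contextual embeddings
        # of shape (batch_size, num_tokens, output_dim).
        return self._contextualizer(inputs)
```

You'd pair this with a `TokenIndexer` that converts each token into whatever input representation your model expects (for ELMo that's character ids, which is why there's a dedicated indexer/embedder pair), then point the BiDAF config's `text_field_embedder` and the dataset reader's `token_indexers` at your registered names. The ELMo indexer and embedder linked above are the best reference for what both pieces need to implement.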
Assume that I have my own PyTorch model that can produce contextualized word embeddings (for simplicity, assume I have a function that takes a sentence and returns a list of embeddings). How can I evaluate my context-specific (ELMo-like) embeddings in the available models? Please provide an example, for instance on the BiDAF model for question answering.