-
## Checklist
- [ ] I have verified that the issue exists against the `master` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/alle…
-
Hello,
Could you give me some insight on the ELMo file format?
If I understand correctly, I have to generate three ELMo files (one for train, one for dev, and one for test), where the first column is…
-
Hi, is it possible to load the ELMo module using TensorFlow Hub and return the full ELMo embeddings? I would like to extract the `"elmo"` embeddings, which return contextual embeddings for each **toke…
-
## ❓ Questions & Help
I'm trying to develop BERT passage similarity, specifically question/answer retrieval. The architecture is pooling BERT contextualized embeddings for passages of text, and t…
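A common way to pool per-token contextual embeddings into a single passage vector is masked mean pooling, then comparing passages by cosine similarity. A minimal numpy sketch of that idea (the shapes, names, and toy values here are illustrative assumptions, not any library's API):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, ignoring padding positions.

    token_embeddings: (seq_len, hidden) contextual vectors, e.g. from BERT.
    attention_mask:   (seq_len,) 1 for real tokens, 0 for padding.
    """
    mask = attention_mask[:, None].astype(float)    # (seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=0)  # (hidden,)
    count = mask.sum()                              # number of real tokens
    return summed / count

def cosine(a, b):
    """Cosine similarity between two pooled passage vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: 4-token passage with one padding position, hidden size 3.
emb = np.array([[1., 0., 0.],
                [0., 1., 0.],
                [0., 0., 1.],
                [9., 9., 9.]])      # padding row, should be ignored
mask = np.array([1, 1, 1, 0])
passage_vec = mean_pool(emb, mask)  # -> [1/3, 1/3, 1/3]
```

For retrieval, the query is pooled the same way and passages are ranked by `cosine(query_vec, passage_vec)`.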
-
I tried to train _en_ewt_ with _BERT-Base-Large-Uncased_, _ru_syntagrus_ with _BERT-Base-Multilingual-Uncased_.
Used [this](https://github.com/CoNLL-UD-2018/UDPipe-Future/blob/master/embeddings/bert…
-
There appears to be a mismatch in dimension size for the word embeddings in the pretrained English NER model. The model definition states that the word embedding dimension is 300, but I was under t…
-
We're working on a language generation task where we have relatively little data available and have been using the "Hierarchical Neural Story Generation" command line tools (thanks, they're really gre…
-
Hi, I've been learning about your model; it's a good model.
I'm confused about the "Convolution + maxpool layer for each filter size" part. What is that part used for? What's the difference compared to just co…
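The "Convolution + maxpool layer for each filter size" pattern (as in Kim-style text CNNs) runs several 1-D convolutions of different widths over the token-embedding matrix, max-pools each feature map over time, and concatenates the results, so the classifier sees the strongest n-gram response for every filter width. A minimal numpy sketch, with illustrative names and shapes (a real model would use many filters per width, plus a bias and nonlinearity):

```python
import numpy as np

def conv_maxpool(embeddings, filters):
    """One 1-D convolution + max-over-time pool per filter width.

    embeddings: (seq_len, emb_dim) token embedding matrix.
    filters:    list of (width, emb_dim) weight matrices, one per size.
    Returns the concatenated max-pooled features, one scalar per filter.
    """
    seq_len, emb_dim = embeddings.shape
    pooled = []
    for w in filters:
        width = w.shape[0]
        # Slide the filter over every window of `width` consecutive tokens.
        feats = [np.sum(embeddings[i:i + width] * w)
                 for i in range(seq_len - width + 1)]
        # Max-over-time pooling keeps only the strongest response.
        pooled.append(max(feats))
    return np.array(pooled)

rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 4))                          # 10 tokens, dim 4
filters = [rng.normal(size=(k, 4)) for k in (2, 3, 4)]  # widths 2, 3, 4
features = conv_maxpool(emb, filters)                   # shape (3,)
```

Using several widths lets the model capture bigram, trigram, and 4-gram patterns at once, which a single convolution width cannot.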
-
Hello, I'm curious how I can get embeddings for multi-word expressions. For instance, from `"George Washington is a president."` I want to get the embedding for `"George Washington"`. Since the paper …
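One common (though not the only) way to get a single vector for a multi-word expression is to average the contextual vectors of its tokens. A numpy sketch, assuming you already have per-token embeddings and know the token span of the expression (the toy vectors below are made up for illustration):

```python
import numpy as np

def span_embedding(token_vectors, start, end):
    """Average the contextual vectors for tokens[start:end].

    token_vectors: (seq_len, hidden) array of per-token embeddings.
    start, end:    token span covering the expression, end exclusive.
    """
    return token_vectors[start:end].mean(axis=0)

# Toy sentence: ["George", "Washington", "is", "a", "president", "."]
vecs = np.array([[1., 2.],
                 [3., 4.],
                 [0., 0.],
                 [0., 0.],
                 [5., 5.],
                 [0., 0.]])
mwe = span_embedding(vecs, 0, 2)  # "George Washington" -> [2., 3.]
```

With subword tokenizers the span must cover all word pieces of the expression, not just whole words.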
-
Based on BERT documentation (https://github.com/google-research/bert#using-bert-to-extract-fixed-feature-vectors-like-elmo) we can extract the contextualized token embeddings of each hidden layer sepa…
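Once the per-layer hidden states are extracted, a common recipe from the BERT feature-extraction examples is to sum or concatenate the last four layers per token. A numpy sketch, assuming `hidden_states` is a (num_layers, seq_len, hidden) stack of the layer outputs (the random array stands in for real model output):

```python
import numpy as np

# Stand-in stack of hidden states: 12 layers, 5 tokens, hidden size 8.
rng = np.random.default_rng(1)
hidden_states = rng.normal(size=(12, 5, 8))

# Sum of the last four layers: one (hidden,)-sized vector per token.
summed = hidden_states[-4:].sum(axis=0)               # (5, 8)

# Or concatenate them for a wider per-token feature vector.
concat = np.concatenate(hidden_states[-4:], axis=-1)  # (5, 32)
```

Summing keeps the original dimensionality, while concatenation preserves which layer each feature came from at the cost of a 4x wider vector.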