-
Hi, I am trying to evaluate the performance of **DocumentRNNEmbeddings** in terms of speed and accuracy. For now, let's focus on **speed**.
I am running the following code:
```
embedding_map…
```
-
In [your paper](https://arxiv.org/abs/1810.04805) (section 5.4, table 7) you indicate that concatenating the last four layers gave the best performance, but details are scarce. I am not sure how to se…
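For what it's worth, here is a minimal sketch of what "concat last four layers" usually means mechanically. The arrays below are random stand-ins for real BERT hidden states (shapes assume BERT-base: an embedding output plus 12 encoder layers, hidden size 768); the only point is the shape arithmetic:

```python
import numpy as np

# Stand-in for a 12-layer BERT-base forward pass: 13 hidden-state arrays
# (embedding output + one per layer), each (seq_len, hidden_size).
seq_len, hidden_size, num_states = 6, 768, 13
rng = np.random.default_rng(0)
hidden_states = [rng.standard_normal((seq_len, hidden_size)) for _ in range(num_states)]

# "Concat last four layers" from table 7: join the final four layers
# feature-wise, giving one (seq_len, 4 * hidden_size) vector per token.
token_features = np.concatenate(hidden_states[-4:], axis=-1)
print(token_features.shape)  # (6, 3072)
```

So each token ends up with a 3072-dim feature vector built from the top four layers, rather than a single layer's 768-dim output.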
-
Hi,
Nice paper and nice code!
I want to try the code out on another dataset, but I'm running into some RAM related issues (i.e. not enough).
Can you provide some information about the system t…
-
In extract_features.py, word embeddings seem to be calculated by summing the token, segment, and position embeddings. So the word embedding of "apple" in "Apple is a very good company" and "Apple is a kin…
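A small sketch of that summation may clarify the concern. The tables below are random stand-ins for BERT's real embedding matrices; it shows that the summed *input* embedding is fully determined by (token id, segment id, position), so context plays no role at this stage — disambiguation only happens in the transformer layers above it:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, max_pos, hidden = 100, 16, 8  # toy sizes, not BERT's real ones
tok_emb = rng.standard_normal((vocab_size, hidden))
seg_emb = rng.standard_normal((2, hidden))
pos_emb = rng.standard_normal((max_pos, hidden))

def input_embedding(token_id, segment_id, position):
    # BERT-style input embedding: element-wise sum of the three tables.
    return tok_emb[token_id] + seg_emb[segment_id] + pos_emb[position]

# The same token id at the same position in two different sentences
# gets an identical input embedding -- the surrounding words don't matter yet.
apple_in_sent_a = input_embedding(token_id=42, segment_id=0, position=0)
apple_in_sent_b = input_embedding(token_id=42, segment_id=0, position=0)
print(np.allclose(apple_in_sent_a, apple_in_sent_b))  # True
```

The context-dependent representations come from the self-attention layers applied on top of these sums, which is why extract_features.py exports the *hidden states* rather than the input embeddings.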
-
The contextual embedding (ELMo or BERT) caching mechanism using an lmdb database is really nice, especially in training mode, because it saves a lot of time after the 1st epoch.
Anyway, when you want to…
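For readers unfamiliar with the mechanism being discussed: the idea is just compute-once memoisation keyed on the sentence. A toy sketch below uses a plain dict in place of the lmdb environment, and a fake embedding function in place of the expensive ELMo/BERT forward pass; only the caching pattern is the point:

```python
import hashlib

class EmbeddingCache:
    """Toy stand-in for an lmdb-backed embedding cache: compute a
    sentence's embedding once, serve later epochs from the store."""

    def __init__(self, compute_fn):
        self.compute_fn = compute_fn   # the expensive ELMo/BERT forward pass
        self.store = {}                # an lmdb environment in the real setup
        self.misses = 0

    def key(self, sentence):
        # Stable key for the sentence text.
        return hashlib.sha1(sentence.encode("utf-8")).hexdigest()

    def get(self, sentence):
        k = self.key(sentence)
        if k not in self.store:
            self.misses += 1           # only the first epoch pays this cost
            self.store[k] = self.compute_fn(sentence)
        return self.store[k]

# Fake "embedding" (word lengths) so the sketch runs without a model.
cache = EmbeddingCache(lambda s: [float(len(w)) for w in s.split()])

for epoch in range(3):                 # epochs 2 and 3 hit the cache
    cache.get("the cat sat")
print(cache.misses)  # 1
```

This is why only the first epoch is slow: every later pass over the corpus reads the stored vectors instead of re-running the model.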
-
Hi, to train with pretrained contextualized char embeddings, which parameters need to be passed to ner_tagger.py? I am passing `charlm_shorthand`, `charlm_save_dir`, `charlm`, but it keeps givi…
-
Hi,
I am new to this. What should I do to solve the ConnectionError when running the example usage?
```
# load the NER tagger
tagger = SequenceTagger.load('ner')
```
The traceback is as follows; what sho…
-
Hi,
Even after trying to work with ELMo and reading about it, I still do not understand how to use it. It looks like, for a given sentence, I have to pass the sentence through the ELMo model, and then I can g…
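In case it helps frame the question: the usual recipe is indeed to run the sentence through ELMo, get one activation per layer, and then combine the layers with a learned "scalar mix" (softmax-normalised weights plus a scale). The sketch below uses random arrays in place of ELMo's real layer outputs; the layer count and dimension (3 layers, 1024-dim) match standard ELMo, everything else is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
num_layers, seq_len, dim = 3, 5, 1024   # standard ELMo: 3 layers, 1024-dim

# Stand-in for the per-layer activations ELMo returns for one sentence.
layers = rng.standard_normal((num_layers, seq_len, dim))

def scalar_mix(layers, weights, gamma=1.0):
    # ELMo-style task representation: softmax-normalised weights over the
    # layers, then a weighted sum scaled by a task-specific gamma.
    w = np.exp(weights) / np.exp(weights).sum()
    return gamma * np.tensordot(w, layers, axes=1)

# With all-zero (untrained) weights, this is just the layer average.
sentence_repr = scalar_mix(layers, weights=np.zeros(num_layers))
print(sentence_repr.shape)  # (5, 1024)
```

In a real pipeline the weights and gamma are trained with the downstream task; with uniform weights, as here, you simply get the mean over layers.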
-
Hi!
First of all, let me begin by thanking you for the pre-trained BERT model, which is a lifesaver. For my latest project, I would like to get ELMo-like contextual embeddings from BioBERT. To re…
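One way to read "ELMo-like" here: instead of a single vector per token, keep *all* per-layer hidden states for each token, so a downstream task can learn its own mix over layers. A shape-only sketch, with random arrays standing in for BioBERT's real hidden states (BERT-base layout assumed: embedding output plus 12 layers):

```python
import numpy as np

rng = np.random.default_rng(3)
num_states, seq_len, dim = 13, 7, 768   # BERT-base: embeddings + 12 layers

# Stand-in for BioBERT's per-layer hidden states for one sentence.
hidden_states = np.stack(
    [rng.standard_normal((seq_len, dim)) for _ in range(num_states)]
)

# ELMo-like view: reorder to (tokens, layers, dim) so each token carries
# its full stack of layer representations.
per_token_layers = hidden_states.transpose(1, 0, 2)
print(per_token_layers.shape)  # (7, 13, 768)
```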
-
Hi! I'm trying to use ABSA for clinical notes, and I'm wondering if it would be possible to switch out ELMo for BERT - specifically BERT-Base and BioBERT-finetuned models trained on both all clinical …