-
I thought the feature vectors extracted from BERT represent word embeddings.
So I thought that, in order to use these embeddings, one just has to extract them (using `extract_features.py`) and then load th…
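For example, something like this is what I had in mind (a rough sketch; the output file name and the JSON keys are my reading of the script's JSONL output, where each line carries a per-token "features" list with "layers"/"values"):
```python
import json
import numpy as np

# Read the JSONL file written by extract_features.py
# (one JSON object per input sentence).
sentences = []
with open("output.jsonl") as f:
    for line in f:
        record = json.loads(line)
        tokens = []
        for feature in record["features"]:
            # take the top layer (index -1) as the token's embedding
            vector = np.array(feature["layers"][0]["values"])
            tokens.append((feature["token"], vector))
        sentences.append(tokens)
```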
-
Hi,
Thanks for your work.
I want to know the maximum GPU memory consumption when training on all Wikipedia entities. I have tried a single Tesla P100 (16 GB) and 4 x Tesla M60 (4 x 8 = 32 GB). They b…
-
When training an NER sequence tagger with `WordEmbeddings('de-fasttext')`, I get a torch serialization error right after the first epoch.
Code:
```
from flair.data_fetcher import NLPTask
from …
```
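For reference, a typical flair setup that reproduces this kind of run looks roughly like the sketch below (the corpus choice and trainer calls are my assumptions, not the exact script from the report; class names vary a bit between flair versions):
```python
from flair.data_fetcher import NLPTask, NLPTaskDataFetcher
from flair.embeddings import StackedEmbeddings, WordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Placeholder corpus: the German CoNLL-03 task shipped with flair's data fetcher.
corpus = NLPTaskDataFetcher.load_corpus(NLPTask.CONLL_03_GERMAN)
tag_dictionary = corpus.make_tag_dictionary(tag_type="ner")

# The embedding that triggers the serialization error after the first epoch.
embeddings = StackedEmbeddings([WordEmbeddings("de-fasttext")])

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type="ner",
)

trainer = ModelTrainer(tagger, corpus)
trainer.train("resources/taggers/ner-german", max_epochs=10)
```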
-
Assume that I have my own model (in PyTorch) that can produce word embeddings (contextualized; for simplicity, assume I have a function that takes a sentence and returns a list of embeddings).
How…
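To make the question concrete, here is the kind of wrapper I had in mind (a sketch that assumes the target is flair's TokenEmbeddings interface; `embed_fn` and the class name are placeholders for my own model):
```python
import torch
from flair.embeddings import TokenEmbeddings

class MyModelEmbeddings(TokenEmbeddings):
    """Wrap a sentence -> list-of-vectors function as flair token embeddings."""

    def __init__(self, embed_fn, embedding_dim):
        self.name = "my-model"
        self.static_embeddings = False
        self.__embedding_length = embedding_dim
        super().__init__()
        # my own model / function behind the embeddings
        self.embed_fn = embed_fn

    @property
    def embedding_length(self) -> int:
        return self.__embedding_length

    def _add_embeddings_internal(self, sentences):
        for sentence in sentences:
            # one contextual vector per token, produced by my model
            vectors = self.embed_fn(sentence.to_plain_string())
            for token, vector in zip(sentence, vectors):
                token.set_embedding(self.name, torch.as_tensor(vector))
        return sentences
```
An instance of such a class could then be combined with flair's own embeddings via StackedEmbeddings and passed to a tagger as usual.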
-
Hi,
Instead of GloVe and fastText word embeddings I am using ELMo, which generates embeddings of size either 512 or 1024. I have made a few modifications to the load_embeddings function i…
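As a quick sanity check on the dimensionality, this is how I inspect the vectors ELMo produces (a sketch using AllenNLP's ElmoEmbedder with the default weights; the modified load_embeddings function itself is not shown):
```python
from allennlp.commands.elmo import ElmoEmbedder

# Default (original) ELMo weights; the smaller pre-trained models
# produce smaller vectors, e.g. 512 for the medium model.
elmo = ElmoEmbedder()

# embed_sentence returns an array of shape (num_layers, num_tokens, dim)
vectors = elmo.embed_sentence(["A", "short", "example", "sentence"])
print(vectors.shape)  # (3, 4, 1024) with the default model
```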
-
Thanks a lot for this work and making it available!
I used ELMo contextualized embeddings in my Keras framework ([DeLFT](https://github.com/kermitt2/delft)) and I could reproduce the excellent res…
-
Hi Loic,
I've been working on segmenting our own colorized point cloud data using your superpoint graph approach. Since annotating some of our own data will be time-consuming (we will eventually do…
-
I have a tensor T = [B, T, D] of contextual word embeddings for a given piece of text. Alongside, I have a tensor S = [B, M, 2] of M spans in this text and their representations, i.e. a tensor R = [B, …
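For concreteness, one common way such spans can be turned into representations from the token embeddings is to gather and concatenate the endpoint vectors (a sketch; X stands in for the [B, T, D] tensor above, and endpoint concatenation is just one choice among several, e.g. mean-pooling over the span):
```python
import torch

def span_representations(X: torch.Tensor, S: torch.Tensor) -> torch.Tensor:
    """Concatenate the start- and end-token embeddings of each span.

    X: [B, T, D] contextual token embeddings
    S: [B, M, 2] integer (start, end) token indices for M spans
    returns: [B, M, 2*D] span representations
    """
    D = X.size(-1)
    starts = S[..., 0].unsqueeze(-1).expand(-1, -1, D)  # [B, M, D]
    ends = S[..., 1].unsqueeze(-1).expand(-1, -1, D)    # [B, M, D]
    start_vecs = torch.gather(X, 1, starts)             # [B, M, D]
    end_vecs = torch.gather(X, 1, ends)                 # [B, M, D]
    return torch.cat([start_vecs, end_vecs], dim=-1)

# toy check: batch of 2 texts, 7 tokens each, 16-dim embeddings, 2 spans per text
X = torch.randn(2, 7, 16)
S = torch.tensor([[[0, 2], [3, 6]],
                  [[1, 1], [2, 5]]])
print(span_representations(X, S).shape)  # torch.Size([2, 2, 32])
```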
-
Hello Alan,
Thanks for your research publication "Contextual String Embeddings for Sequence Labeling"; I tried to use your pre-trained model in my research.
Currently, I found your language…
-
Because of the small size of the CoNLL-2003 training set, some authors incorporated the development set into the training data after tuning the hyper-parameters. Consequently, not all results are …