-
Hi,
I have run the run_pretraining.py script on my domain-specific data.
It seems that only checkpoints are saved: I got two files, 0000020.params and 0000020.states.
How can I save …
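For context: in MXNet Gluon, the `.params` file holds the model weights and the `.states` file holds the trainer/optimizer state. Below is a minimal sketch of that save/restore round trip, using a tiny stand-in network rather than the actual BERT model that run_pretraining.py builds:

```python
import mxnet as mx
from mxnet import gluon, autograd

# Hypothetical stand-in net; in practice this would be the same BERT
# model definition used by run_pretraining.py.
net = gluon.nn.Dense(10)
net.initialize()
trainer = gluon.Trainer(net.collect_params(), 'adam')

# One dummy update so the trainer has optimizer state to save.
with autograd.record():
    loss = net(mx.nd.ones((1, 4))).sum()
loss.backward()
trainer.step(batch_size=1)

net.save_parameters('0000020.params')   # model weights
trainer.save_states('0000020.states')   # optimizer state (momentum, step count)

# Restoring the checkpoint later:
net.load_parameters('0000020.params')
trainer.load_states('0000020.states')
```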
-
Sorry to bother you, but I really want to ask: when encoding the words, why do you use nn.Embedding rather than a pre-trained embedding such as GloVe? Hope you can help me with this question. Tha…
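For what it's worth, PyTorch can initialize an nn.Embedding from pre-trained vectors. A minimal sketch, with random weights standing in for a real GloVe matrix (the 3-word vocabulary and 50-dim size are made up for illustration):

```python
import torch
import torch.nn as nn

# Stand-in for a GloVe matrix: one row per vocabulary word.
# In practice you would parse a glove.*.txt file and stack the
# vectors in vocabulary order.
glove_weights = torch.randn(3, 50)  # 3 words, 50-dim vectors

# freeze=True keeps the vectors fixed; use freeze=False to fine-tune them.
embedding = nn.Embedding.from_pretrained(glove_weights, freeze=True)

word_ids = torch.tensor([0, 2])
vectors = embedding(word_ids)
print(vectors.shape)  # torch.Size([2, 50])
```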
-
**System (please complete the following information):**
- OS: Linux Ubuntu 18
- Python version: 3.6.5
- AllenNLP version: 0.8.3
- PyTorch version: 1.1.0
**Question**
I'm trying to use the …
-
For example, in the 'D. Contextualized Word Embeddings' section, you wrote 'In Eq. 8, s_{j}^{task} ...'. We could figure it out from the context, though. :)
-
Hi FLAIR team and users,
Given a document, I would like to compute the contextualized word embedding for arbitrary words in the document.
How can I do this with FLAIR? I successfully trained langu…
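For anyone with the same question, a minimal sketch of how per-token embeddings are typically obtained in FLAIR, here using the stock 'news-forward' model (substitute the path of your own trained language model if you have one):

```python
from flair.data import Sentence
from flair.embeddings import FlairEmbeddings

# A pre-trained forward character LM; a custom-trained model can be
# loaded by passing its checkpoint path instead of 'news-forward'.
embedding = FlairEmbeddings('news-forward')

sentence = Sentence('The grass is green .')
embedding.embed(sentence)

# Each token now carries a contextualized vector.
for token in sentence:
    print(token.text, token.embedding.shape)
```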
-
Hi,
I am a little worried about the capacity of the Bi-LSTM. As shown in Table 4, the maximum sequence length is 1,664. Does that mean your pre-trained LSTM model needs to load all 1663 amino ac…
-
I'm looking to use BERT to create contextual embeddings of words in my documents. This is similar to ELMo as noted [in the README](https://github.com/google-research/bert#using-bert-to-extract-fixed-f…
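As a sketch of the same idea outside that repo, the Hugging Face transformers library (an assumption here, not the google-research/bert codebase the README describes) exposes one contextual vector per WordPiece token:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

inputs = tokenizer('The grass is green.', return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state: one contextual embedding per WordPiece token.
token_vectors = outputs.last_hidden_state[0]
print(token_vectors.shape)  # (num_tokens, 768) for bert-base
```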
-
In the examples, it loads the bert-base model and does some tasks. The paper says that it will fix the parameters of BERT and only update the parameters of our tasks, but I find that it seems not to fix param…
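For reference, a common way to get the fixed-encoder behavior the paper describes is to disable gradients on the BERT submodule yourself. A minimal PyTorch sketch, assuming the Hugging Face transformers library and a hypothetical classification head:

```python
import torch.nn as nn
from transformers import BertModel

class Classifier(nn.Module):
    """Hypothetical task head on top of a frozen BERT encoder."""
    def __init__(self, num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        self.head = nn.Linear(self.bert.config.hidden_size, num_labels)
        # Freeze BERT: its parameters receive no gradient updates,
        # so only the task head is trained.
        for param in self.bert.parameters():
            param.requires_grad = False

    def forward(self, input_ids, attention_mask=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        pooled = outputs.last_hidden_state[:, 0]  # [CLS] vector
        return self.head(pooled)
```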
-
Hello,
First of all, I would like to thank you for this awesome explanation of the seq2seq model.
As I am very new to PyTorch, this detailed explanation is very helpful.
I am trying to implemen…