-
Thanks for the great repo. I am just wondering if it is possible to control the maximum sequence length when using document or sentence embeddings. I did not find it in the docs.
Thanks
ghost updated
4 years ago
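Not sure this is in the docs anywhere, but one common way to cap sequence length is at the tokenizer level. A minimal sketch, assuming a HuggingFace-style tokenizer; whether the document/sentence embedding classes here expose such a knob directly would need checking against this repo:
```
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tok("some very long document text ...", truncation=True, max_length=128)
print(len(enc["input_ids"]))  # at most 128 token ids
```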
-
I am wondering whether you would be interested in having the FinBERT model(s) hosted as part of the word-embeddings repository maintained by the NLPL consortium? Among other things, this would pr…
oepen updated
4 years ago
-
## Checklist
- [x] I have verified that the issue exists against the `master` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/alle…
-
## ❓ Questions and Help
**Description**
Hi, we can use GloVe embeddings when building the vocab, using something like:
```
MIN_FREQ = 2
TEXT.build_vocab(train_data,
                 min_fre…
```
antgr updated
4 years ago
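For reference, a minimal sketch of the (legacy) torchtext pattern being asked about, with GloVe vectors attached at `build_vocab` time; the field setup and dataset are illustrative assumptions, and older torchtext versions import from `torchtext.data` instead of `torchtext.legacy.data`:
```
import torch
from torchtext.legacy import data, datasets

TEXT = data.Field(tokenize="spacy", lower=True)
LABEL = data.LabelField()
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)

MIN_FREQ = 2
TEXT.build_vocab(train_data,
                 min_freq=MIN_FREQ,
                 vectors="glove.6B.100d",        # pre-trained GloVe vectors
                 unk_init=torch.Tensor.normal_)  # init for out-of-vocab words
```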
-
In the encoder-decoder architecture, the encoder output is passed to the decoder as the keys to be used in attention. Here (https://github.com/lucidrains/reformer-pytorch/blob/5f5bbf4fd5806f45d2cb3b7373021786b3b3…
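To make the question concrete, here is a minimal sketch of cross-attention where decoder states act as queries and encoder outputs act as keys/values; the shapes are illustrative assumptions, not reformer-pytorch's actual implementation:
```
import torch
import torch.nn.functional as F

d_model = 64
enc_out = torch.randn(1, 100, d_model)  # (batch, src_len, d_model) from encoder
dec_hid = torch.randn(1, 20, d_model)   # (batch, tgt_len, d_model) from decoder

q = dec_hid          # queries come from the decoder
k = v = enc_out      # keys/values come from the encoder output

scores = q @ k.transpose(-2, -1) / d_model ** 0.5  # (1, 20, 100)
attn = F.softmax(scores, dim=-1)
context = attn @ v                                 # (1, 20, d_model)
```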
-
I was trying to find the references for where the sliding window is implemented to process long sequences. How do we split a long sequence, and after getting the embeddings, how do we unpack them?…
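One common scheme, sketched under the assumption that `embed_fn` maps a list of token ids to a `(len, dim)` tensor: embed overlapping windows, then average the embeddings at positions covered by more than one window. Window and stride sizes here are illustrative, not the repo's actual values:
```
import torch

def sliding_window_embed(token_ids, embed_fn, dim, window=512, stride=256):
    # Accumulate embeddings and per-position counts, then average overlaps.
    total = torch.zeros(len(token_ids), dim)
    counts = torch.zeros(len(token_ids), 1)
    start = 0
    while True:
        chunk = token_ids[start:start + window]
        total[start:start + len(chunk)] += embed_fn(chunk)
        counts[start:start + len(chunk)] += 1
        if start + window >= len(token_ids):
            break
        start += stride
    return total / counts  # (len(token_ids), dim)
```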
-
Hi,
### (1) Update the guide to support the newest version
I'm going through the "Next Steps" chapter > section "Switching to pre-trained contextualizers".
First, the config file shown in the guide uses:…
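For anyone hitting the same version mismatch, a minimal Python-side sketch of what the "pre-trained contextualizer" setup maps to in recent AllenNLP releases; the model name is an assumption:
```
from allennlp.data.token_indexers import PretrainedTransformerIndexer
from allennlp.modules.token_embedders import PretrainedTransformerEmbedder

indexer = PretrainedTransformerIndexer(model_name="bert-base-uncased")
embedder = PretrainedTransformerEmbedder(model_name="bert-base-uncased")
```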
-
I have a task where I want to obtain better word embeddings for food ingredients. Since I am a bit new to the field of NLP, I also have some fundamental doubts that I would love to have corrected…
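One baseline worth trying is training domain-specific vectors directly on an ingredient corpus. A minimal sketch with gensim's Word2Vec; the corpus and hyperparameters are illustrative assumptions:
```
from gensim.models import Word2Vec

# Each "sentence" is a tokenized ingredient list (toy data for illustration).
corpus = [["flour", "sugar", "butter", "vanilla"],
          ["tomato", "basil", "olive_oil", "garlic"]]

model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                 min_count=1, workers=2)
print(model.wv.most_similar("flour", topn=3))
```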
-
I have a masked LM pretrained with BERT.
The embeddings are poor on the sentence level, but do well for base tokens.
There is a natural tree structure to my corpus that I believe stands to gain …
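A common baseline when sentence-level vectors from a masked LM are weak is mean-pooling the token embeddings instead of relying on a single special token. A minimal sketch, assuming a HuggingFace checkpoint; the model name is illustrative:
```
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

enc = tok("an example sentence from the corpus", return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state      # (1, seq_len, dim)

mask = enc["attention_mask"].unsqueeze(-1)       # zero out padding positions
sent_vec = (hidden * mask).sum(1) / mask.sum(1)  # (1, dim) sentence vector
```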
-
Does AllenNLP contain any pre-trained character embeddings to use? Or an empty (randomly initialized) one?
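In case it helps, a minimal sketch of wiring up a randomly initialized (not pre-trained) character embedder in AllenNLP, which is the usual route; all sizes are illustrative assumptions:
```
from allennlp.modules.seq2vec_encoders import CnnEncoder
from allennlp.modules.token_embedders import Embedding, TokenCharactersEncoder

# Character vocab size and embedding dim are placeholder values.
char_embedding = Embedding(embedding_dim=16, num_embeddings=262)
char_encoder = CnnEncoder(embedding_dim=16, num_filters=100,
                          ngram_filter_sizes=(3,))
token_embedder = TokenCharactersEncoder(char_embedding, char_encoder)
```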