-
**Describe the bug**
The Colab notebook stops on the first iteration.
**To Reproduce**
```python
embedding_types: List[TokenEmbeddings] = [
#embedding = RoBERTaEmbedding…
```
-
I know the standard operation:
```python
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('roberta-large')
input_ids = torch.tensor(tokenizer.e…
```
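Since the snippet above is cut off, here is a minimal runnable sketch of the standard extraction with Hugging Face transformers; the input sentence is illustrative.

```python
# Minimal sketch of standard RoBERTa embedding extraction with
# Hugging Face transformers; the input sentence is illustrative.
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('roberta-large')

input_ids = torch.tensor(tokenizer.encode("Hello, world!")).unsqueeze(0)  # batch of 1
with torch.no_grad():
    outputs = model(input_ids)
token_embeddings = outputs[0]  # (1, sequence_length, 1024) for roberta-large
```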
-
When evaluating conversation models with bert-score, a natural idea is that it is beneficial to prepend the source sentence to both the hypothesis and the reference to form better contextual embeddin…
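A hedged sketch of that idea, using plain string concatenation and bert-score's public `score` function; the sentences are made up.

```python
# Sketch: prepend the source utterance to both hypothesis and reference
# before scoring, so BERT sees the conversational context.
# (Illustrative sentences; bert_score.score is the library's public API.)
from bert_score import score

source = "How was your day?"
hypothesis = "It was great, thanks for asking."
reference = "Pretty good, thank you."

P, R, F1 = score([source + " " + hypothesis],
                 [source + " " + reference],
                 lang="en")
print(F1.item())
```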
-
I am trying to train a NER model using stacked embeddings:
```python
embedding_types: List[TokenEmbeddings] = [
#embedding = RoBERTaEmbeddings(pretrained_mo…
```
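For reference, a minimal runnable sketch of flair's stacked-embedding setup; since the RoBERTa line above is cut off, the embedding choices here are illustrative stand-ins.

```python
# Minimal flair StackedEmbeddings setup for NER training.
# The embedding choices below are illustrative.
from typing import List
from flair.embeddings import (
    TokenEmbeddings,
    WordEmbeddings,
    FlairEmbeddings,
    StackedEmbeddings,
)

embedding_types: List[TokenEmbeddings] = [
    WordEmbeddings('glove'),
    FlairEmbeddings('news-forward'),
    FlairEmbeddings('news-backward'),
]
embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)
```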
-
The [sentence transformers](https://github.com/UKPLab/sentence-transformers) library has great pre-trained models to produce embeddings for entire sentences.
We should add a new `DocumentEmbedding…
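For context, a minimal sketch of how the sentence-transformers library produces sentence embeddings; the model name is one of its published pre-trained models.

```python
# Minimal sentence-transformers usage; encode() returns one vector
# per input sentence.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('bert-base-nli-mean-tokens')
sentences = [
    'This framework generates embeddings for each input sentence.',
    'Sentences are passed as a list of strings.',
]
embeddings = model.encode(sentences)  # one embedding vector per sentence
```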
-
Hi,
Even after trying to work with ELMo and reading about it, I still don't understand how to use it. It looks like, for a given sentence, I have to pass the sentence through the ELMo model and then I can g…
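A hedged sketch of one common way to do this, using the `ElmoEmbedder` from older allennlp versions with its default pre-trained model; the example tokens are illustrative.

```python
# Sketch using allennlp's ElmoEmbedder with its default pre-trained model
# (downloads the default options/weights on first use).
from allennlp.commands.elmo import ElmoEmbedder

elmo = ElmoEmbedder()
tokens = ["I", "ate", "an", "apple", "for", "breakfast"]
vectors = elmo.embed_sentence(tokens)  # numpy array: (3 layers, 6 tokens, 1024 dims)
# Layer 0 is the context-insensitive token layer; layers 1-2 are the
# contextual biLSTM layers, so a common choice is the top layer:
top_layer = vectors[2]
```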
-
**Working** example:
```python
print(geo.geoparse(
"""
Wuppertal remote-option
"""
))
```
Result:
```
[{'word': 'Wuppertal', 'spans': [{'start': 1, 'end': 10}], 'country_predicted': 'DEU', 'country_conf':…
```
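For context, the setup assumed by the snippet above is mordecai's `Geoparser`, which needs the geonames Elasticsearch index that mordecai documents.

```python
# Setup assumed by the example above: mordecai's Geoparser.
# Requires a running geonames Elasticsearch index.
from mordecai import Geoparser

geo = Geoparser()
```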
-
The flag `SequenceTagger.relearn_embeddings` is always set to `True` and is used to add a `Linear` layer called `embedding2nn`:
https://github.com/flairNLP/flair/blob/4ce32c774b4dc5a8bfc0559441a1c8da…
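A minimal sketch (not flair's exact code) of what that layer amounts to: a square learned linear re-projection applied to the stacked token embeddings before they enter the tagger's RNN. The dimension below is illustrative.

```python
# Sketch of the embedding2nn re-projection: a square Linear layer
# applied to the stacked token embeddings before the RNN.
import torch
import torch.nn as nn

embedding_dim = 4196  # illustrative size of the stacked embeddings
embedding2nn = nn.Linear(embedding_dim, embedding_dim)

sentence_tensor = torch.randn(1, 20, embedding_dim)  # (batch, tokens, dim)
sentence_tensor = embedding2nn(sentence_tensor)      # same shape, re-projected
```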
-
**READER BEWARE**: this is a rabbit hole and perhaps the most exciting part of knowledge graphs. Continue at your own risk.
I'm skeptical that any of these would be quite good enough out of the box fo…
-
Hi,
In the example notebook for Contextual Topic Modeling, we can get the topic distribution for a document via
`distribution = ctm.get_thetas(training_dataset)[8] # topic distribution for the first do…
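A hedged sketch of how that output is typically used, assuming `ctm` and `training_dataset` exist as in the notebook and that `get_thetas` returns one topic distribution per document.

```python
# Sketch, assuming ctm and training_dataset are defined as in the
# notebook above; get_thetas yields one distribution per document.
import numpy as np

thetas = np.array(ctm.get_thetas(training_dataset))  # (n_documents, n_topics)
doc_distribution = thetas[8]                         # distribution for document index 8
top_topic = int(np.argmax(doc_distribution))         # most probable topic for that doc
```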