-
Hi,
Congrats on the release!! Is long-form synthesis planned?
Thank you!
-
Starting the CoreNLP server is not pleasant for anyone: it is big, relatively slow, and its usage is a bit clunky.
Other options are spaCy or nltk.
First experiments show that `nltk`'s Named En…
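Whichever library wins, a thin common interface keeps the choice reversible. Below is a sketch of such an interface with a dummy backend for illustration; all names here are hypothetical, and a real adapter would wrap spaCy's `nlp(text).ents` or nltk's `ne_chunk` instead.

```python
import re
from typing import List, NamedTuple, Protocol

class Entity(NamedTuple):
    text: str
    label: str
    start: int
    end: int

class NERBackend(Protocol):
    """Minimal interface both a spaCy and an nltk adapter could implement."""
    def entities(self, text: str) -> List[Entity]: ...

class RegexBackend:
    """Dummy backend for illustration only: tags capitalized words as MISC.
    A real adapter would delegate to spacy.load(...) or nltk.ne_chunk(...)."""
    def entities(self, text: str) -> List[Entity]:
        return [Entity(m.group(), "MISC", m.start(), m.end())
                for m in re.finditer(r"\b[A-Z][a-z]+\b", text)]
```

Swapping backends then only requires changing which class is constructed, not any calling code.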
-
Hi, thanks for sharing the code! I am trying to reproduce the results and have run into some problems.
Below are my steps to train a model on the _DailyDialog_ corpus and evaluate it on the _DialSeg_711_ da…
-
@nreimers Hello, for the model 'xlm-r-100langs-bert-base-nli-stsb-mean-tokens', how can I get the vocab.txt shown in Fig. 1? I want to convert a sentence to input_ids in Java. But the pre-trained…
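For context on what a vocab file does, here is a toy sketch of how a plain vocabulary maps a sentence to input_ids via greedy longest-match-first (WordPiece-style) tokenization. Note this is purely illustrative: the vocabulary and special-token ids below are made up, and XLM-R models generally ship a SentencePiece model rather than a plain vocab.txt, so this is not the exact procedure for that checkpoint.

```python
# Hypothetical toy vocabulary; "##" marks word-internal subword pieces.
TOY_VOCAB = {"[CLS]": 0, "[SEP]": 1, "[UNK]": 2, "hello": 3, "wor": 4, "##ld": 5}

def tokenize(word, vocab):
    """Greedy longest-match-first subword split (WordPiece-style)."""
    pieces, start = [], 0
    while start < len(word):
        end, match = len(word), None
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                match = sub
                break
            end -= 1
        if match is None:          # no piece found: whole word is unknown
            return ["[UNK]"]
        pieces.append(match)
        start = end
    return pieces

def to_input_ids(sentence, vocab):
    """Map a sentence to ids, wrapping it in [CLS] ... [SEP]."""
    ids = [vocab["[CLS]"]]
    for word in sentence.lower().split():
        ids.extend(vocab[p] for p in tokenize(word, vocab))
    ids.append(vocab["[SEP]"])
    return ids
```

The same lookup logic is straightforward to port to Java once the correct vocabulary (or SentencePiece model) for the checkpoint is available.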
-
Hi Maarten,
I'm attempting to execute one of your examples in Google Colab for processing large-scale databases. Here are the specifications of my machine: 8 NVIDIA A100 cards and a 50TB SSD. Howev…
-
Can you please support the instructor models here?
https://github.com/xlang-ai/instructor-embedding
These are arguably the best models for their sizes.
-
Hello,
Thanks for this library, it's pretty nice! I have a project using it that basically exposes sentence embeddings as a stateless HTTP API. In order to make distribution easy and consistent, it…
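As a point of reference for the stateless-HTTP-API pattern described above, here is a minimal standard-library sketch of such an endpoint. The `embed` function is a dummy stand-in for a real sentence-embedding model, and the `/embed` route and JSON shape are assumptions, not this project's actual API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def embed(text):
    # Dummy stand-in for a real model call, e.g. SentenceTransformer.encode.
    return [float(len(text)), float(sum(map(ord, text)) % 100)]

class EmbedHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Stateless: each request carries everything needed to answer it.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"embedding": embed(payload["text"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet

def serve(port=0):
    """Start the server on a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), EmbedHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Because the handler holds no per-client state, the service can be replicated behind a load balancer or packaged in a container without coordination between instances.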
-
Hey, I am unable to load the model from the Hugging Face checkpoint. Here is the code and the error:
```py
from DictMatching.moco import MoCo
from utilsWord.test_args import getArgs
from transfor…
-
When using the `LongformerModel` from [transformers](https://huggingface.co/transformers/model_doc/longformer.html#longformermodel), it accepts `attention_mask` and `global_attention_mask`. But how…
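For anyone hitting the same question: the two masks serve different purposes. `attention_mask` is the usual 1-for-real-token, 0-for-padding mask, while `global_attention_mask` marks the (few) positions that should attend to, and be attended by, every other token, typically the CLS token for classification. A sketch using plain lists (real code would build torch tensors; the pad id of 1 below is an assumption matching RoBERTa-style vocabularies):

```python
def build_masks(token_ids, pad_id=1, global_positions=(0,)):
    """Build Longformer-style masks as plain lists (illustrative sketch).

    attention_mask:        1 for real tokens, 0 for padding.
    global_attention_mask: 1 only where a token should get global attention,
                           e.g. position 0 for a CLS token.
    """
    attention_mask = [0 if t == pad_id else 1 for t in token_ids]
    global_attention_mask = [
        1 if i in global_positions and real == 1 else 0
        for i, real in enumerate(attention_mask)
    ]
    return attention_mask, global_attention_mask
```

Both lists would then be converted to tensors and passed to the model as the `attention_mask` and `global_attention_mask` arguments.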
-
I am using BERT for sentiment classification, currently classifying into positive, negative, and neutral.
Some of my data contains emojis, and the model always classifies those examples as neutral. I think I …
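One common workaround for this symptom is that the tokenizer maps unseen emojis to an unknown token, so the model never sees their sentiment signal. A hedged sketch of a preprocessing step that replaces emojis with sentiment-bearing words before tokenization; the mapping below is a tiny hypothetical example (libraries like `emoji` provide full mappings):

```python
# Hypothetical emoji-to-word mapping; a real one would cover far more symbols.
EMOJI_MAP = {
    "😀": " happy ",
    "😢": " sad ",
    "😡": " angry ",
    "👍": " thumbs up ",
}

def demojize(text, mapping=EMOJI_MAP):
    """Replace emojis with words so the tokenizer sees in-vocabulary tokens."""
    for emoji, word in mapping.items():
        text = text.replace(emoji, word)
    return " ".join(text.split())  # collapse the extra whitespace
```

Running training data through such a step (or fine-tuning on emoji-rich text) usually lets the classifier pick up the signal instead of defaulting to neutral.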