-
I've been looking for up-to-date information about how various pre-trained models fare for sentence similarity and clustering tasks (e.g. with [BERTopic](https://github.com/MaartenGr/BERTopic)), rathe…
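In case a concrete comparison setup helps, one way to benchmark candidates is simply to swap them into BERTopic as the embedding model and compare the resulting topics. A minimal sketch, where the corpus and the two model names are only examples:

```python
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
from sklearn.datasets import fetch_20newsgroups

# A small real corpus; any list of strings works here.
docs = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))["data"][:1000]

# The two model names are just examples; any sentence-embedding model can be swapped in.
for model_name in ["all-MiniLM-L6-v2", "all-mpnet-base-v2"]:
    embedder = SentenceTransformer(model_name)
    topic_model = BERTopic(embedding_model=embedder)
    topics, probs = topic_model.fit_transform(docs)
    print(model_name, "->", len(topic_model.get_topic_info()), "topics (incl. outliers)")
```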
-
**Describe the bug**
E5-small/base/large v1/v2
Hi unilm team,
Thank you so much for the great project! We are trying to replace sentence-transformers with E5. When using the official example scripts wi…
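For anyone hitting the same issue, here is a minimal sketch of how the E5 checkpoints are typically used through sentence-transformers, assuming the `intfloat/e5-base-v2` checkpoint and the `query:`/`passage:` prefixes described in the E5 model cards:

```python
from sentence_transformers import SentenceTransformer, util

# E5 expects "query: " / "passage: " prefixes; cosine similarity is computed
# on the normalized embeddings.
model = SentenceTransformer("intfloat/e5-base-v2")

queries = ["query: how much protein should a female eat"]
passages = [
    "passage: As a general guideline, the CDC's average requirement of protein "
    "for women ages 19 to 70 is 46 grams per day.",
    "passage: The capital of France is Paris.",
]

q_emb = model.encode(queries, normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)
print(util.cos_sim(q_emb, p_emb))
```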
-
**Paper**
Contextual Augmentation: Data Augmentation by Words with Paradigmatic Relations
**Introduction**
This article is part of a series of efforts that have used language models for data augm…
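To make the idea concrete, here is a simplified sketch of contextual augmentation with an off-the-shelf masked language model. Unlike the paper, it is not conditioned on the class label, and the model name and example sentence are only illustrations:

```python
import random
from transformers import pipeline

# Simplified sketch: the paper conditions its LM on the class label, which is
# omitted here; an off-the-shelf masked LM proposes the replacement words.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def augment(sentence: str, num_variants: int = 3) -> list[str]:
    tokens = sentence.split()
    variants = []
    for _ in range(num_variants):
        i = random.randrange(len(tokens))
        masked = " ".join(tokens[:i] + [fill_mask.tokenizer.mask_token] + tokens[i + 1:])
        # Take the top prediction that differs from the original word.
        for pred in fill_mask(masked):
            word = pred["token_str"].strip()
            if word != tokens[i].lower():
                variants.append(masked.replace(fill_mask.tokenizer.mask_token, word))
                break
    return variants

print(augment("the actor gave a performance in the movie"))
```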
-
Hi,
I am trying to obtain the semantic similarity between the generated and the ground truth sentence.
I used all these metrics to evaluate the generated sentences (validation dataset):
BLEU 1…
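If an embedding-based score is also of interest alongside the n-gram metrics, a minimal sketch with sentence-transformers (the model name and sentence pair are just examples):

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical example pair; in practice these would be the generated and
# ground-truth sentences from the validation set.
generated = ["a man is playing a guitar on stage"]
reference = ["someone performs a song with a guitar"]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model
gen_emb = model.encode(generated, convert_to_tensor=True)
ref_emb = model.encode(reference, convert_to_tensor=True)

# Cosine similarity of the paired sentences (diagonal of the score matrix).
scores = util.cos_sim(gen_emb, ref_emb).diagonal()
print(scores)  # one semantic-similarity score per generated/reference pair
```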
-
I calculate sentence similarity scores using SBERT and then train my own language model, and I use different sentence combination methods.
Given two lists of sentences, list_1 = [s1, s2, s3], lis…
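For reference, a minimal sketch that contrasts two combination methods (scoring every pair across the two lists versus only the aligned pairs), with illustrative sentences and an example model name:

```python
from sentence_transformers import SentenceTransformer, util

list_1 = ["A man is eating food.", "A woman is playing violin.", "Two kids are running."]
list_2 = ["Someone is having a meal.", "A person plays an instrument.", "Children race outside."]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb_1 = model.encode(list_1, convert_to_tensor=True)
emb_2 = model.encode(list_2, convert_to_tensor=True)

sim_matrix = util.cos_sim(emb_1, emb_2)  # all combinations: sim_matrix[i][j] = sim(list_1[i], list_2[j])
paired_scores = sim_matrix.diagonal()    # aligned pairs only: (s1, t1), (s2, t2), (s3, t3)
print(sim_matrix)
print(paired_scores)
```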
-
hey man,
This is my first contact with sentence embeddings, and I have some doubts.
The multilingualism here refers to the semantic similarity between Chinese sentences and other Chinese sentences, and the sim…
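In case it helps, a minimal sketch with a multilingual checkpoint (the model name and sentences are only examples); the same vector space covers both Chinese-Chinese and Chinese-English pairs:

```python
from sentence_transformers import SentenceTransformer, util

# A multilingual model maps different languages into one vector space,
# so zh-zh and zh-en pairs can both be scored with cosine similarity.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

sentences = [
    "今天天气很好",                 # "The weather is nice today"
    "今天天气不错",                 # "The weather is quite good today"
    "The weather is nice today",
]
emb = model.encode(sentences, convert_to_tensor=True)
print(util.cos_sim(emb, emb))  # zh-zh and zh-en similarities in one matrix
```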
-
The model's output is a torch.cuda.FloatTensor. How can I get the actual similarity score between two sentences?
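A minimal sketch of one way to turn that tensor into a plain Python float, assuming a sentence-transformers setup and an example model name:

```python
import torch
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2",
                            device="cuda" if torch.cuda.is_available() else "cpu")
emb = model.encode(["The cat sits on the mat.", "A cat is resting on a rug."],
                   convert_to_tensor=True)

score_tensor = util.cos_sim(emb[0], emb[1])  # shape (1, 1), may live on the GPU
score = score_tensor.item()                  # .item() returns a plain Python float
print(score)
```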
-
This task involves automating the 'precheck' stage, which currently relies on a human 'triage-er' to validate whether the student model already knows the information which a user is trying to te…
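Purely as a sketch of the general idea (not the project's actual pipeline; the model names and the threshold below are assumptions), the precheck could query the student model and compare its answer to the new information via embedding similarity:

```python
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

# Assumptions: a small generative model stands in for the "student", and a
# fixed cosine-similarity threshold stands in for the human triage decision.
student = pipeline("text-generation", model="gpt2")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def already_known(question: str, new_information: str, threshold: float = 0.8) -> bool:
    answer = student(question, max_new_tokens=30)[0]["generated_text"]
    emb = embedder.encode([answer, new_information], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item() >= threshold

print(already_known("What is the capital of France?", "The capital of France is Paris."))
```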
-
In _Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks_, Section 4: Evaluation - Semantic Textual Similarity, you include the Spearman rank correlation between the cosine similarity of the e…
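For context, a minimal sketch of how that correlation is typically computed, with toy sentence pairs and an example model standing in for the STS data:

```python
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

# Toy STS-style data: sentence pairs with human similarity labels (0-5).
pairs = [
    ("A man is playing a guitar.", "A person plays a guitar.", 4.8),
    ("A man is playing a guitar.", "A child is eating an apple.", 0.4),
    ("Two dogs run in the park.", "Dogs are running outside.", 4.2),
]

model = SentenceTransformer("all-MiniLM-L6-v2")
cos_scores = [
    util.cos_sim(model.encode(a, convert_to_tensor=True),
                 model.encode(b, convert_to_tensor=True)).item()
    for a, b, _ in pairs
]
gold = [label for _, _, label in pairs]

# Spearman rank correlation between cosine similarities and human labels.
corr, _ = spearmanr(cos_scores, gold)
print(corr)
```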
-
Hello, I just want to say a big thank you for sentence-transformers.
I have a question about backward_loss in MultipleNegativesSymmetricRankingLoss.
According to the code:
```
import torch
from …
```
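For what it's worth, here is my understanding of what the symmetric loss computes, written as a standalone sketch rather than the library's actual implementation: the backward term is the same cross-entropy applied to the transposed similarity matrix.

```python
import torch
import torch.nn.functional as F

def symmetric_mnr_loss(anchor_emb: torch.Tensor, positive_emb: torch.Tensor,
                       scale: float = 20.0) -> torch.Tensor:
    """Sketch of a symmetric multiple-negatives ranking loss.

    scores[i][j] is the scaled cosine similarity between anchor i and positive j;
    the i-th anchor should match the i-th positive, so the labels are 0..n-1.
    """
    anchor_emb = F.normalize(anchor_emb, dim=-1)
    positive_emb = F.normalize(positive_emb, dim=-1)
    scores = anchor_emb @ positive_emb.T * scale
    labels = torch.arange(scores.size(0), device=scores.device)

    forward_loss = F.cross_entropy(scores, labels)     # anchors -> positives
    backward_loss = F.cross_entropy(scores.T, labels)  # positives -> anchors
    return (forward_loss + backward_loss) / 2

# Toy usage with random embeddings.
a = torch.randn(4, 384)
p = torch.randn(4, 384)
print(symmetric_mnr_loss(a, p))
```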