-
It looks like the server which hosts the pre-trained models (https://public.ukp.informatik.tu-darmstadt.de/reimers/sentence-transformers/v0.2/) has been unavailable for a few hours now.
-
I use the training_stsbenchmark.py script to perform the STS task. There were no problems during training, but during dev evaluation it always reported an out-of-memory error. After careful debugging, I foun…
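If the whole dev split is being encoded in one go, chunking the sentences bounds peak memory. A minimal sketch of the batching idea (plain Python; `model.encode`, `dev_sentences`, and the batch size of 8 are illustrative assumptions, not the script's actual defaults):

```python
def batched(items, batch_size):
    """Yield successive slices of at most batch_size items,
    so only one small chunk is encoded at a time."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Hypothetical usage with a SentenceTransformer model:
# embeddings = [model.encode(chunk) for chunk in batched(dev_sentences, 8)]
```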
-
Hi,
Before my question, I'd like to thank you for open-sourcing your awesome work to the community.
Context:
I'm working on continued training of the SentenceTransformer (`'bert-base-nli-mean…
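For context, continued training in this style usually means: load the pretrained checkpoint, wrap your data in `InputExample`s, and call `fit`. A hedged sketch, assuming sentence-transformers is installed; the sentence pairs and all hyperparameters below are made-up placeholders:

```python
# Sketch: continue training 'bert-base-nli-mean-tokens' on custom scored pairs.
# The pairs and hyperparameters are placeholders, not data from the issue.

train_pairs = [
    ("A man is eating food.", "A man is eating a meal.", 0.9),
    ("A man is eating food.", "The girl is playing the piano.", 0.1),
]

if __name__ == "__main__":
    from torch.utils.data import DataLoader
    from sentence_transformers import SentenceTransformer, InputExample, losses

    model = SentenceTransformer("bert-base-nli-mean-tokens")
    examples = [InputExample(texts=[a, b], label=score)
                for a, b, score in train_pairs]
    loader = DataLoader(examples, shuffle=True, batch_size=16)
    loss = losses.CosineSimilarityLoss(model)  # regression on cosine similarity
    model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```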
-
I ran the following command:
python examples/evaluation_stsbenchmark.py
And I got the following results:
2019-11-06 09:47:12 - Cosine-Similarity : Pearson: 0.7415 Spearman: 0.7698
20…
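For reference, those numbers are correlations between the cosine similarity of each embedding pair and the gold STS scores. A self-contained sketch of the two quantities involved (pure Python; `cosine` and `pearson` are illustrative helpers, not the evaluator's actual code — Spearman is the same formula applied to ranks):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def pearson(xs, ys):
    """Pearson correlation between predicted similarities and gold scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```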
-
bert-base-nli-stsb-mean-tokens: First fine-tuned on AllNLI, then on STS benchmark training set. Performance: STSbenchmark: 85.14
It says first fine-tuned on AllNLI, but in training_stsbenchmark_ber…
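The two stages are separate runs: the `bert-base-nli-*` checkpoints already contain the AllNLI stage, so the STSbenchmark script only needs to add the second stage on top. A sketch of that second stage, assuming sentence-transformers is installed (the data row and hyperparameters are placeholders); the `normalize_score` helper reflects that gold STS scores run 0–5 while `CosineSimilarityLoss` expects labels in [0, 1]:

```python
# Sketch of stage two only: the loaded checkpoint already includes the
# AllNLI stage, so we fine-tune it further on STSbenchmark-style data.

def normalize_score(score):
    """Map a gold STS score (0-5) into [0, 1] for CosineSimilarityLoss."""
    return score / 5.0

if __name__ == "__main__":
    from torch.utils.data import DataLoader
    from sentence_transformers import SentenceTransformer, InputExample, losses

    model = SentenceTransformer("bert-base-nli-mean-tokens")  # stage 1 baked in
    # Placeholder row; the real script reads the STSbenchmark train split.
    rows = [("A plane is taking off.", "An air plane is taking off.", 5.0)]
    examples = [InputExample(texts=[a, b], label=normalize_score(s))
                for a, b, s in rows]
    loader = DataLoader(examples, shuffle=True, batch_size=16)
    model.fit(train_objectives=[(loader, losses.CosineSimilarityLoss(model))],
              epochs=4, warmup_steps=100)
```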
-
Hi Team UKP Lab,
I'm looking for an ALBERT model to use with sentence-transformers for creating embeddings of dimension 128. I've tried running the script provided in the examples for crea…
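One way to get 128-dimensional embeddings is to append a dense projection after pooling. A hedged sketch, assuming a sentence-transformers release that exposes `models.Transformer` and the Hugging Face `albert-base-v2` checkpoint (both are assumptions about your setup; older releases exposed ALBERT differently):

```python
# Sketch: ALBERT encoder + mean pooling + a dense layer projecting to 128 dims.
# 'albert-base-v2' and models.Transformer are assumptions, not the issue's setup.

TARGET_DIM = 128  # requested embedding size

if __name__ == "__main__":
    from torch import nn
    from sentence_transformers import SentenceTransformer, models

    word_model = models.Transformer("albert-base-v2")
    pooling = models.Pooling(word_model.get_word_embedding_dimension())
    dense = models.Dense(
        in_features=pooling.get_sentence_embedding_dimension(),
        out_features=TARGET_DIM,
        activation_function=nn.Tanh(),
    )
    model = SentenceTransformer(modules=[word_model, pooling, dense])
    # model.get_sentence_embedding_dimension() should now report 128
```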
-
In your paper, you state "The purpose of SBERT sentence embeddings are not to be used for transfer learning for other tasks. Here, we think fine-tuning BERT as described by Devlin et al. (2018) for ne…
-
Hi, I am fine-tuning with the **training_nli_bert.py** script using the bert-multilingual model to generate sentence embeddings in Urdu. I have only the NLI dataset available for training and evaluation.
…
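With only NLI labels available, the usual recipe is classification training with `SoftmaxLoss` over the three NLI classes, as in the repo's NLI examples. A sketch assuming sentence-transformers is installed; dataset loading is left out, the model name is a placeholder, and the example row is made up:

```python
# Sketch: NLI-only training with a multilingual BERT via SoftmaxLoss.
# The label mapping mirrors the repo's NLI examples; everything else here
# (model name, example row, hyperparameters) is a placeholder.

label2int = {"contradiction": 0, "entailment": 1, "neutral": 2}

if __name__ == "__main__":
    from torch.utils.data import DataLoader
    from sentence_transformers import SentenceTransformer, InputExample, losses

    model = SentenceTransformer("bert-base-multilingual-cased")  # placeholder
    # Placeholder example; real rows come from your Urdu NLI file.
    examples = [InputExample(texts=["premise", "hypothesis"],
                             label=label2int["entailment"])]
    loader = DataLoader(examples, shuffle=True, batch_size=16)
    loss = losses.SoftmaxLoss(
        model=model,
        sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
        num_labels=len(label2int),
    )
    model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```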
-
Hi,
My goal here is to do clustering on sentences. For this purpose, I chose to use similarities between sentence embeddings for all my sentences. Unfortunately, CamemBERT is not great for that task…
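For clustering, the usual pipeline is: encode every sentence to a vector, then run any vector clusterer (e.g. k-means) on the resulting matrix. A sketch; the pure `nearest_centroid` helper just illustrates the assignment step of k-means, while the guarded part assumes sentence-transformers and scikit-learn are installed (model name, sentences, and cluster count are placeholders):

```python
import math

def nearest_centroid(vec, centroids):
    """Return the index of the Euclidean-nearest centroid
    (the assignment step k-means performs for each embedding)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(range(len(centroids)), key=lambda i: dist(vec, centroids[i]))

if __name__ == "__main__":
    from sklearn.cluster import KMeans
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("bert-base-nli-mean-tokens")  # placeholder model
    sentences = ["Le chat dort.", "Un chat fait la sieste.", "Il pleut fort."]
    embeddings = model.encode(sentences)
    labels = KMeans(n_clusters=2).fit_predict(embeddings)  # cluster id per sentence
```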
-
The two sentences are NOT concatenated and input into BERT together; each is encoded separately to get two representations.
Then a cosine loss is used to train.
Am I right?
I read
![image](https://user-images.githubusercontent.com/4702353/62…
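That reading matches the SBERT regression setup: a siamese network encodes each sentence independently, and the training signal compares the cosine similarity of the two embeddings against the gold score. A pure-Python sketch of that objective (the helpers are illustrative, not the library's code, and the embeddings are taken as given):

```python
import math

def cosine_sim(u, v):
    """Cosine similarity of two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cosine_regression_loss(emb1, emb2, gold_score):
    """Squared error between the cosine of two independently encoded
    sentences and the gold similarity score: the two sentences never
    meet inside BERT, only in this comparison."""
    return (cosine_sim(emb1, emb2) - gold_score) ** 2
```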