-
- [ ] [sentence-transformers/README.md at master · liuyukid/sentence-transformers](https://github.com/liuyukid/sentence-transformers/blob/master/README.md?plain=1)
-
Dear author, I want to use the bert-base-uncased model to train on the NLI dataset based on your method for some research. Could you provide the relevant training scripts so that I can better reproduce your exper…
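For background, the SBERT NLI objective encodes both sentences with a shared encoder and classifies the concatenation (u, v, |u − v|) into the three NLI labels with a softmax head. A minimal pure-PyTorch sketch of that classification head (random tensors stand in for the BERT + pooling embeddings; all names here are illustrative, not the repository's code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

dim, num_labels = 8, 3  # 3 NLI labels: entailment / neutral / contradiction

# Stand-in sentence embeddings (in the real script these come from BERT + mean pooling).
u = torch.randn(4, dim)  # batch of premise embeddings
v = torch.randn(4, dim)  # batch of hypothesis embeddings
labels = torch.tensor([0, 1, 2, 0])

# The softmax head classifies the concatenation (u, v, |u - v|).
classifier = nn.Linear(3 * dim, num_labels)
features = torch.cat([u, v, torch.abs(u - v)], dim=-1)
logits = classifier(features)
loss = nn.functional.cross_entropy(logits, labels)
print(features.shape, logits.shape, float(loss))
```

In the library this is what `losses.SoftmaxLoss` computes during NLI training; here only the head is shown so the example runs without downloading a model.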
-
Hi! I'm trying to implement the avg word embeddings example, and I think I found an error.
In `examples/training_basic_models/training_stsbenchmark_avg_word_embeddings.py`, line 44, shouldn't this:
`…
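For context, the avg-word-embeddings model in that script mean-pools the token vectors (ignoring padding) before any further layers. A rough sketch of that pooling step in plain PyTorch (shapes and tensors here are made up for illustration, not the library's code):

```python
import torch

torch.manual_seed(0)

# Token embeddings for a batch: (batch, seq_len, emb_dim); padded positions are masked out.
token_embeddings = torch.randn(2, 5, 300)
attention_mask = torch.tensor([[1, 1, 1, 0, 0],
                               [1, 1, 1, 1, 1]])

# Mean pooling: sum the unmasked token vectors, divide by the count of real tokens.
mask = attention_mask.unsqueeze(-1).float()    # (2, 5, 1)
summed = (token_embeddings * mask).sum(dim=1)  # (2, 300)
counts = mask.sum(dim=1).clamp(min=1e-9)       # (2, 1); clamp guards against empty rows
sentence_embeddings = summed / counts
print(sentence_embeddings.shape)
```

Dividing by the masked token count rather than `seq_len` is the detail that bugs in such scripts usually hinge on: otherwise padded positions silently shrink the average.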
-
I have trained an SBERT model from scratch using the code in [training_nli.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training_transformers/training_nli.py) and [https://github.com/…
-
Hi!
I'm using sentence-transformers/gtr-t5-base as the base encoder with SimCSE in sentence-transformers ([this example](https://github.com/UKPLab/sentence-transformers/blob/master/examples/unsuperv…
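For background, unsupervised SimCSE encodes each sentence twice (dropout produces two different views) and applies an in-batch-negatives cross-entropy over the similarity matrix, which is what the library's `MultipleNegativesRankingLoss` computes. A minimal pure-PyTorch sketch of that loss, with random tensors standing in for the two encoder passes (the perturbation below is an assumption to mimic dropout noise):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, dim, temperature = 4, 16, 0.05

# Two passes of the same batch through the encoder; dropout makes them differ.
z1 = torch.randn(batch, dim)
z2 = z1 + 0.01 * torch.randn(batch, dim)  # slightly perturbed "second view"

# Cosine similarity matrix between the two views, scaled by temperature.
sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature

# Each sentence's positive is its own second view (the diagonal);
# every other row in the batch acts as a negative.
labels = torch.arange(batch)
loss = F.cross_entropy(sim, labels)
print(float(loss))
```

In the library the same effect is reached by passing `InputExample(texts=[sent, sent])` pairs to `MultipleNegativesRankingLoss`; the sketch only makes the objective explicit.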
-
@izhx Could you kindly share the training scripts for models of varying scales, or alternatively, the complete set of training parameters used for models at different scales?
I saw the following par…
-
Hi,
I evaluated on the STS Benchmark without training SBERT -
the SBERT model consists of a BERT layer plus a pooling layer, and I evaluated it using SentEval.
I ran the same evaluation on USE. Below are the results…
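For reference, the STS Benchmark score reported in such comparisons is the Spearman rank correlation between per-pair cosine similarities and the gold scores. A small self-contained sketch of that computation (random vectors stand in for the model's embeddings; the `spearman` helper assumes no rank ties):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation (no ties assumed) via Pearson on ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

rng = np.random.default_rng(0)
emb1 = rng.normal(size=(6, 8))    # embeddings of the first sentences
emb2 = rng.normal(size=(6, 8))    # embeddings of the second sentences
gold = rng.uniform(0, 5, size=6)  # gold STS similarity scores

# Cosine similarity per pair, then rank-correlate with the gold scores.
cos = (emb1 * emb2).sum(1) / (np.linalg.norm(emb1, axis=1) * np.linalg.norm(emb2, axis=1))
print(round(spearman(cos, gold), 4))
```

Because Spearman only compares rankings, any monotone rescaling of either the similarities or the gold scores leaves the benchmark number unchanged.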
-
I want to use the class "ParallelSentencesDataset" to load my very large parallel data to fine-tune the pre-trained model "LaBSE". But when I used it, it seems that this class "ParallelSentencesDatase…
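For context, `ParallelSentencesDataset` is built for multilingual knowledge distillation: the student is trained so that its embeddings of both the source sentence and its translation match the (frozen) teacher's embedding of the source. A rough sketch of that MSE objective in plain PyTorch, with random tensors standing in for the teacher and student outputs (all tensors here are made-up placeholders, not LaBSE embeddings):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, dim = 4, 16

# Teacher embedding of the source sentence (computed once, kept frozen).
teacher_src = torch.randn(batch, dim)

# Student embeddings of the source sentence and of its translation;
# modeled here as the teacher's output plus small noise.
student_src = teacher_src + 0.1 * torch.randn(batch, dim)
student_tgt = teacher_src + 0.1 * torch.randn(batch, dim)

# Both student outputs are pulled toward the teacher's source embedding.
loss = F.mse_loss(student_src, teacher_src) + F.mse_loss(student_tgt, teacher_src)
print(float(loss))
```

In the library this corresponds to pairing `ParallelSentencesDataset` with `losses.MSELoss`; the dataset streams source/translation pairs so the full parallel corpus never has to fit in memory at once.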
-
### Description
Rename existing notebook files to match naming conventions:
- 1 quickstart per scenario. This should be
-
Currently, we use Cosine Similarity as the similarity metric. With complex architectures like BERT, it may not be effective, as the objective functions used for pre-training or fine-tuning do not direct…
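For concreteness, the usual candidate metrics differ mainly in how they treat vector magnitude; a small pure-PyTorch comparison (the example vectors are made up):

```python
import torch
import torch.nn.functional as F

a = torch.tensor([1.0, 2.0, 3.0])
b = 2 * a  # same direction as a, twice the norm

# Cosine similarity ignores magnitude: a and 2a are identical under it.
cos_ab = F.cosine_similarity(a, b, dim=0)

# Dot product and Euclidean distance are both magnitude-sensitive,
# so rescaling the embeddings changes the ranking they induce.
dot_ab = torch.dot(a, b)
euc_ab = torch.dist(a, b)
print(float(cos_ab), float(dot_ab), float(euc_ab))
```

Which behavior is desirable depends on whether the training objective normalizes embeddings; with unnormalized BERT outputs, magnitude carries information that cosine similarity discards.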