-
When calling:
model = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens')
I consistently receive timeout errors.
MaxRetryError: HTTPSConnectionPool(host='sbert.net', port=443): Max re…
-
Problem:
When I run the sample code provided for evaluation, I run into the following error in the file supert.py:
![image](https://user-images.githubusercontent.com/54659709/166067508-32e399d2-3…
-
Hi there,
Thank you for the excellent work and for publishing the code base.
I am attempting to reproduce the retrieval performance of BGE-base as shown in Table 3 but have encountered some issu…
-
I have about 6 million sentences and my embedding vector size is 768 using SBERT.
The problem is that the embedding data is too large! (6 million sentences produce over 200 GB)
I never knew that a…
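For context, a quick back-of-the-envelope check (pure Python, no dependencies) suggests the 200 GB figure likely comes from the serialization format (e.g. pickled Python object lists or text output) rather than the embeddings themselves, since a dense float32 matrix of this shape is much smaller:

```python
# Rough size of a dense 6M x 768 embedding matrix at two precisions.
n_sentences = 6_000_000
dim = 768

size_fp32_gb = n_sentences * dim * 4 / 1024**3   # 4 bytes per float32
size_fp16_gb = n_sentences * dim * 2 / 1024**3   # 2 bytes per float16

print(f"float32: {size_fp32_gb:.1f} GB, float16: {size_fp16_gb:.1f} GB")
# float32: 17.2 GB, float16: 8.6 GB
```

Saving the matrix as a float16 NumPy array (e.g. with `numpy.save`) roughly halves the float32 footprint, and approximate-search libraries such as FAISS can compress further via int8 or product quantization if some retrieval precision can be traded away.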
-
Hey,
this is the second time I have encountered low results for specific models. In short, I once trained `deepset/gbert-base` with `train_msmarco_v3_margin_MSE.py` and it worked like a charm. Then I tried …
-
I have installed the packages from requirements.txt.
Upon running the python ingest.py command, I get the following error:
python ingest.py
Loading documents from /home/computer/Downloads/localGPT-main/SOURCE…
-
Hi, this is more of a question / discussion rather than an issue.
I have run a test doing inference with the native OpenAI CLIP (a priori trained only on English texts, if I'm not mistaken) and your multi…
-
How to continue the pretraining of Sentence-BERT models using MLM?
Is there any documentation or code snippet for this purpose?
I would like to continue the pretraining of "all-MiniLM-L6-v2" mode…
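The Sentence-Transformers repository ships a reference script for exactly this (examples/unsupervised_learning/MLM/train_mlm.py). As a minimal sketch of the same idea with Hugging Face `transformers` (assumed installed): `all-MiniLM-L6-v2` wraps a BERT-style encoder, so its weights can be loaded into `AutoModelForMaskedLM` — the encoder is reused and only the MLM head is freshly initialized. The function name and hyperparameters below are illustrative:

```python
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)


def build_mlm_trainer(train_texts, output_dir="minilm-mlm-continued",
                      model_name="sentence-transformers/all-MiniLM-L6-v2"):
    """Return a Trainer that continues MLM pretraining on `train_texts`."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)

    encodings = tokenizer(train_texts, truncation=True, max_length=256)
    dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

    # The collator performs the dynamic 15% token masking each batch.
    collator = DataCollatorForLanguageModeling(
        tokenizer=tokenizer, mlm=True, mlm_probability=0.15
    )
    args = TrainingArguments(
        output_dir=output_dir,
        per_device_train_batch_size=32,
        num_train_epochs=1,
    )
    return Trainer(model=model, args=args, data_collator=collator,
                   train_dataset=dataset)
```

After `trainer.train()` and `trainer.save_model()`, the saved encoder directory can be loaded back into a `SentenceTransformer` model; note the pooling configuration has to be re-attached, since the MLM checkpoint stores only the transformer.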
-
I fine-tuned some embeddings and performed subtraction on a subset of some sentences' embeddings.
These sentences are similar in the sense of edit distance.
And I hope this will perform some sens…
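One caveat worth stating: SBERT-style embeddings are trained for cosine similarity between whole sentences, not for linear offsets, so whether subtraction carries meaning has to be checked empirically. A toy sketch with made-up 3-d vectors (all values hypothetical) of one such check — if the same surface edit induces a consistent semantic shift, the offsets of two edited pairs should point in a similar direction:

```python
import math


def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def offset(a, b):
    """Element-wise difference a - b."""
    return [x - y for x, y in zip(a, b)]


# Hypothetical embeddings of two sentence pairs related by the same edit.
pair1 = offset([0.9, 0.1, 0.0], [0.8, 0.2, 0.1])
pair2 = offset([0.5, 0.6, 0.2], [0.4, 0.7, 0.3])

print(round(cosine(pair1, pair2), 3))  # 1.0: offsets are parallel here
```

With real embeddings, offset cosines well below 1 would suggest the edit is not encoded as a consistent linear direction.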
-
Hello,
I am fine-tuning a bi-encoder SBERT model on domain-specific data for semantic similarity. There is no loss value reported by the `fit` function from the package. Any idea how to know if the mo…