-
Hello. I am upgrading the scripts from version 2.7 of the package to 3.11 and would like a bit more clarity on the dataset/loss documentation. For context, the training pipeline uses di…
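In case it helps with the migration: in the v3 training API, losses consume a Hugging Face `datasets.Dataset` whose columns match the loss signature, rather than the v2-style lists of `InputExample`s. A minimal sketch of that conversion (the pair data below is made up for illustration, and the trainer call in the comment assumes the standard v3 `SentenceTransformerTrainer` setup):

```python
# Sketch: converting v2-style (anchor, positive) pairs into the
# column-per-field layout that v3 losses such as
# MultipleNegativesRankingLoss expect. The pairs are illustrative.
pairs = [
    ("How do I reset my password?", "Steps to reset a forgotten password"),
    ("Install the package with pip", "pip install instructions"),
]

# v3 wants one column per input, not one row object per example.
columns = {
    "anchor": [a for a, _ in pairs],
    "positive": [p for _, p in pairs],
}

# With the `datasets` library installed, this dict feeds directly into:
#   from datasets import Dataset
#   train_dataset = Dataset.from_dict(columns)
# and then into SentenceTransformerTrainer(..., train_dataset=train_dataset).
print(list(columns))  # ['anchor', 'positive']
```

The column names matter: each loss documents which columns (e.g. `anchor`/`positive`, or an extra `negative`) it expects.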
-
in dataset_load.py, line 376 "tokenizer_name = '/home/duhw/pretrain_model/sbert/'"
-
When using `SetFit` for classification in a more technical domain, I could imagine the generically-trained `SBERT` models may produce poor sentence embeddings if the domain is not represented well eno…
-
When I run main.py, I am facing this issue:
```
Traceback (most recent call last):
  File "/home/sysadm/Documents/GNN-RAG-main/gnn/models/ReaRev/main.py", line 50, in <module>
    main()
  File "/home/sysadm/Documen…
```
-
# Performance Comparison
- [Ref](https://benchmark.vectorview.ai/vectordbs.html)
Feature | Pinecone | Weaviate | Milvus | Qdrant | Chroma | Elasticsearch | PGvector
-- | -- | -- | -- | -- | -- …
-
I'm using one of the Hugging Face models, sentence-transformers/all-MiniLM-L6-v2, for semantic search. Currently I'm facing trouble when searching for exact keywords. This is basically required when s…
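A common workaround (not specific to this model) is hybrid retrieval: blend the dense cosine score with an exact-keyword score so that literal matches are not lost. A toy sketch with made-up documents and 3-d stand-in embeddings; the weighting and the trivial keyword score are illustrative assumptions, and in practice the lexical side would usually be BM25 or similar:

```python
import math

# Toy corpus: text plus a made-up 3-d "embedding" standing in for
# all-MiniLM-L6-v2 output (real embeddings are 384-d).
docs = {
    "d1": ("error code E1234 on startup", [0.1, 0.9, 0.2]),
    "d2": ("application fails to launch", [0.2, 0.8, 0.3]),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def keyword_score(query, text):
    # Fraction of query terms appearing verbatim in the document.
    terms = query.lower().split()
    words = text.lower().split()
    return sum(t in words for t in terms) / len(terms)

def hybrid_score(query, q_emb, doc_text, d_emb, alpha=0.5):
    # alpha blends semantic and exact-match evidence; tune it on your data.
    return alpha * cosine(q_emb, d_emb) + (1 - alpha) * keyword_score(query, doc_text)

q = "E1234"
q_emb = [0.15, 0.85, 0.25]  # pretend query embedding
ranked = sorted(docs, key=lambda d: hybrid_score(q, q_emb, *docs[d]), reverse=True)
print(ranked[0])  # prints "d1": the exact match on E1234 lifts it to the top
```

The dense scores of the two documents are nearly identical here; the keyword term is what makes the document containing the literal token win.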
-
Create a retriever based on a Sentence-BERT model, passing a value, e.g. 10, to the `k` param.
It is not taken into account when calling the retriever (more values are returned):
```
retriever = retrieve.Enc…
```
-
Hi SBERT Members,
First of all, I want to thank you personally for your awesome product! I want to know what the best way is to enrich SBERT with metadata (categorical or numeric). Does SBERT suppo…
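SBERT itself only embeds text, so one common pattern (an assumption here, not a built-in feature of the library) is to concatenate the sentence embedding with an encoded metadata vector before indexing or similarity search. A toy sketch with a made-up embedding; the one-hot encoding, price scaling, and `weight` knob are all illustrative choices:

```python
def one_hot(value, vocabulary):
    # Encode a categorical field as a one-hot vector.
    return [1.0 if value == v else 0.0 for v in vocabulary]

def enrich(sent_emb, category, price, categories, weight=0.3):
    # Concatenate the text embedding with scaled metadata features.
    # `weight` controls how much the metadata influences cosine similarity
    # relative to the text part; it is a tuning knob, not a fixed rule.
    meta = one_hot(category, categories) + [price / 100.0]
    return sent_emb + [weight * x for x in meta]

categories = ["hardware", "software"]
emb = [0.2, 0.5, 0.1]  # pretend SBERT output; real vectors are 384-d or larger
vec = enrich(emb, "software", 42.0, categories)
print(len(vec))  # 6: three text dims + two category dims + one price dim
```

A learned alternative is to prepend metadata as text tokens (e.g. "category: software. <sentence>") and fine-tune, letting the model weight the metadata itself.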
-
Hi,
I'm looking into creating a TorchScript executable of SBERT for getting embeddings of sentences, similar to the one described [here](https://huggingface.co/transformers/torchscript.html).
Can …
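For what it's worth, the transformer body usually traces the same way as in the linked guide; the SBERT-specific piece is the pooling head. A minimal sketch (dummy tensor sizes, not a full export) showing that the mean-pooling step is traceable with `torch.jit.trace`:

```python
import torch

class MeanPooling(torch.nn.Module):
    # The pooling SBERT applies on top of the transformer: average the
    # token embeddings, ignoring padding positions via the attention mask.
    def forward(self, token_embeddings, attention_mask):
        mask = attention_mask.unsqueeze(-1).float()
        summed = (token_embeddings * mask).sum(dim=1)
        counts = mask.sum(dim=1).clamp(min=1e-9)
        return summed / counts

# Trace with example inputs, as in the linked transformers guide.
tokens = torch.randn(2, 8, 16)             # (batch, seq_len, hidden) dummy sizes
mask = torch.ones(2, 8, dtype=torch.long)  # no padding in this toy example
traced = torch.jit.trace(MeanPooling(), (tokens, mask))
out = traced(tokens, mask)
print(out.shape)  # torch.Size([2, 16])
```

The traced transformer and the traced pooling module can then be saved with `torch.jit.save` and composed at inference time.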
-
Hey,
it seems straightforwardly possible that I can use some of the pretrained models (SBERT) with the cross-encoders? It seems that they all have a BertForSequenceClassification model available w…