-
I decided to use `paraphrase-mpnet-base-v2` embeddings for a regression task.
I tried fine-tuning the model by adding a layer on top of it and training it so that it receives a single senten…
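One way to sketch the "layer on top" idea is a small trainable regression head over frozen sentence embeddings. The dimensions (768-dim input, 256 hidden units) and the dummy batch below are illustrative assumptions; in practice the embeddings would come from something like `SentenceTransformer('paraphrase-mpnet-base-v2').encode(sentences, convert_to_tensor=True)`.

```python
import torch
import torch.nn as nn

class RegressionHead(nn.Module):
    """Small trainable head placed on top of frozen sentence embeddings."""

    def __init__(self, embed_dim: int = 768, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # single scalar regression target
        )

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # (batch, embed_dim) -> (batch,)
        return self.net(embeddings).squeeze(-1)

# Dummy batch standing in for encoded sentences (768-dim, as mpnet produces).
embeddings = torch.randn(8, 768)
targets = torch.randn(8)

head = RegressionHead()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# One training step with MSE loss, the usual choice for regression.
predictions = head(embeddings)
loss = nn.functional.mse_loss(predictions, targets)
loss.backward()
optimizer.step()
```

Keeping the encoder frozen and training only the head is the cheapest variant; unfreezing the encoder for full fine-tuning is also possible but needs a much smaller learning rate.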
-
Hi Nils, thanks for the fantastic work.
Considering an information retrieval system with a two-step approach — **(1)** BM25; **(2)** re-ranking — do you have any thoughts on what is the best way t…
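For context, the two-step pipeline the question refers to can be sketched end to end. The BM25 implementation below follows the standard Okapi formula; the `toy_scorer` is a hypothetical stand-in for the slower second-stage model (in practice something like a cross-encoder's predict call), and the document texts are made up:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Stage 1: score every document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    n_docs = len(tokenized)
    df = Counter()                      # document frequency per term
    for doc in tokenized:
        df.update(set(doc))
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores

def rerank(query, candidates, scorer):
    """Stage 2: re-score only the cheap-retrieval candidates with a slower model."""
    return sorted(candidates, key=lambda doc: scorer(query, doc), reverse=True)

docs = [
    "bm25 is a ranking function used by search engines",
    "re-ranking with a cross-encoder improves precision",
    "bananas are a good source of potassium",
]
query = "bm25 search ranking"

# Stage 1: keep only the top-2 BM25 candidates.
scores = bm25_scores(query, docs)
top = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)[:2]
candidates = [docs[i] for i in top]

# Stage 2: toy word-overlap scorer standing in for a neural re-ranker.
toy_scorer = lambda q, d: len(set(q.split()) & set(d.split()))
final = rerank(query, candidates, toy_scorer)
```

The point of the split is cost: BM25 prunes the corpus cheaply so the expensive re-ranker only sees a handful of candidates.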
-
Hi there!
Now that adapters work, does it make sense to add native support so that you can use an adapter for the query and for sentence2 with model.train?
-
Is there any way to get low-dimensional sentence embeddings? For example, I want the model to output 50-dim sentence embeddings directly.
Is it possible?
Thanks!
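One common approach is to train a PCA projection on top of existing embeddings and apply it to every new encoding. A pure-numpy sketch, using random vectors as a stand-in for the model's 768-dim output:

```python
import numpy as np

def fit_pca_reduction(embeddings: np.ndarray, target_dim: int = 50):
    """Fit a PCA projection; returns reduced embeddings plus the mean and
    components needed to project new vectors the same way."""
    mean = embeddings.mean(axis=0)
    centered = embeddings - mean
    # Right singular vectors of the centered matrix are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:target_dim]          # (target_dim, original_dim)
    reduced = centered @ components.T     # (n, target_dim)
    return reduced, mean, components

# Stand-in for 768-dim vectors from model.encode(sentences).
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((100, 768))
reduced, mean, components = fit_pca_reduction(embeddings, target_dim=50)

# New vectors are projected with the same mean and components:
# low_dim = (new_embedding - mean) @ components.T
```

Alternatively, sentence-transformers lets you append a `models.Dense` module with `out_features=50` when assembling a model, so the model emits 50-dim vectors directly (that layer then needs to be trained or fine-tuned).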
-
Hi everyone!
I'm familiar with SBERT and its pre-trained models and they are amazing! But at the same time, I want to understand how the results are calculated.
For example, I have a document and …
-
Hi,
I would like to implement a feature where, given a set of search results, a client can select the most accurate results and refine the search based on the selected hits. One idea I had was to …
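This kind of refine-from-selected-hits loop is classic relevance feedback, and in an embedding space the Rocchio update is the usual starting point: pull the query vector toward the embeddings the client marked as good. A minimal sketch with toy 3-dim vectors (real ones would come from the encoder):

```python
import numpy as np

def rocchio(query_emb, relevant, non_relevant=None,
            alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio update: move the query toward selected (relevant) hits and,
    optionally, away from rejected ones. Weights are conventional defaults."""
    updated = alpha * query_emb
    if len(relevant):
        updated = updated + beta * np.mean(relevant, axis=0)
    if non_relevant is not None and len(non_relevant):
        updated = updated - gamma * np.mean(non_relevant, axis=0)
    return updated

query = np.array([1.0, 0.0, 0.0])
selected = np.array([[0.0, 1.0, 0.0],   # embeddings of hits the client picked
                     [0.0, 0.8, 0.2]])
new_query = rocchio(query, selected)
# Search again with new_query instead of query.
```

Re-normalizing `new_query` before a cosine-similarity search keeps scores comparable across refinement rounds.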
-
Hi,
We fine-tuned Sentence Transformers on our domain-specific data (similar to NLI data). It is giving high cosine scores for irrelevant suggestions. We used good, bad, ok while labeling the da…
-
Hi,
Thanks for publishing and sharing the TSDAE approach.
I am reading through the paper. I have one question.
In Section 7.4 of the [paper](https://arxiv.org/pdf/2104.06979.pdf), it recommend…
-
Hey, first up, thank you for building and open-sourcing such a great piece of work. I have been using INSTRUCTOR for some time now and I absolutely love it.
I'm planning on working on generating embed…
-
We currently use `models2.json`: https://github.com/simonw/llm-gpt4all/blob/67079c00fa64cba4f163c4579c2c4aab2c91f45a/llm_gpt4all.py#L44-L49
Looks like they introduced `models3.json` two months ago:…