-
I am currently working on semantic similarity for comparing business descriptions. To this end, I'm using sentence transformers to vectorize the texts and cosine similarity as a comparison metric. How…
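A minimal sketch of that setup, assuming the `sentence-transformers` package; the model name below is only an example:
```python
from sentence_transformers import SentenceTransformer, util

# Load a pretrained sentence-transformers model (model name is just an example).
model = SentenceTransformer("all-MiniLM-L6-v2")

descriptions = [
    "We manufacture industrial pumps and valves.",
    "Provider of cloud-based accounting software for small businesses.",
]

# Encode both business descriptions into dense vectors.
embeddings = model.encode(descriptions, convert_to_tensor=True)

# Cosine similarity between the two descriptions.
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"cosine similarity: {similarity:.4f}")
```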
-
First, thank you so much for sentence-transformers.
How can I get the embedding vector when the input is already tokenized?
I guess sentence-transformers can do `.encode(original text)`.
But I want …
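One possible route (a sketch, not an official answer): build the feature dict that `.encode()` uses internally and pass it through the model directly. Here `model.tokenize()` stands in for whatever produced the token ids; this assumes the pre-tokenized input matches the model's own tokenizer.
```python
import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # model name is an example

# model.tokenize() returns the feature dict (input_ids, attention_mask, ...)
# that .encode() builds internally from raw text.
features = model.tokenize(["An example sentence."])
features = {k: v.to(model.device) for k, v in features.items()}

# Feeding the feature dict through the model yields the pooled sentence
# embedding without calling .encode(original text).
with torch.no_grad():
    out = model(features)

embedding = out["sentence_embedding"]
print(embedding.shape)
```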
-
Hi all,
I have successfully trained a model (`trainMode=2` if it matters). I would now like to infer the similarity between a collection of sentences and a new sentence -- as per the original paper…
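Not specific to this repo or to `trainMode=2`; just a generic sketch of scoring one new sentence vector against a matrix of precomputed sentence vectors with cosine similarity, assuming the embeddings are already available as NumPy arrays:
```python
import numpy as np

def rank_by_similarity(collection_vecs: np.ndarray, query_vec: np.ndarray):
    """Cosine similarity of one new sentence vector against a collection (rows)."""
    collection_norm = collection_vecs / np.linalg.norm(collection_vecs, axis=1, keepdims=True)
    query_norm = query_vec / np.linalg.norm(query_vec)
    sims = collection_norm @ query_norm
    order = np.argsort(-sims)          # most similar first
    return order, sims[order]

# Random vectors standing in for real sentence embeddings.
rng = np.random.default_rng(0)
collection = rng.normal(size=(5, 128))
query = rng.normal(size=128)
idx, scores = rank_by_similarity(collection, query)
print(idx, scores)
```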
-
### A note for the community
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to …
-
Hi, thanks for your code! I have some questions about the model.
When we construct the prototype matrix (N_l x N_p x D), the 1xD vectors in it are derived from the whole image/sentence;
However, when…
-
Thanks for sharing the excellent source code. I am confused about the vector-halving function:
```
def downsample_vectors(vecs1):
    a, b, c = vecs1.shape
    half = np.empty((a, b // 2, c), dtype…
```
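The snippet above is cut off, so the following is only a guess at a runnable version: it assumes the function halves the second (sequence) axis by averaging adjacent pairs of vectors, which may not match the actual implementation.
```python
import numpy as np

def downsample_vectors(vecs1: np.ndarray) -> np.ndarray:
    """Halve the second dimension by averaging adjacent pairs of vectors.

    Input shape (a, b, c) -> output shape (a, b // 2, c).
    """
    a, b, c = vecs1.shape
    half = np.empty((a, b // 2, c), dtype=vecs1.dtype)
    for i in range(b // 2):
        half[:, i, :] = (vecs1[:, 2 * i, :] + vecs1[:, 2 * i + 1, :]) / 2.0
    return half

x = np.arange(2 * 4 * 3, dtype=np.float32).reshape(2, 4, 3)
print(downsample_vectors(x).shape)  # (2, 2, 3)
```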
-
**To generate new questions, this line is used, where config-trans specifies the input text:**
$> `th translate.lua -model model/ -config config-trans`
**1) What is the requirement of the input t…
-
I am using FlagEmbedding and it works well.
While using it, I found something I don't understand.
If the same sentence is inferred on different GPUs on different servers, the value of the embedding vector is differe…
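Small numeric differences between devices are common (different CUDA kernels, non-deterministic reductions, fp16 vs. fp32), so one way to judge whether the mismatch matters is to check the element-wise tolerance and the downstream similarity. A sketch with simulated embeddings:
```python
import numpy as np

# Hypothetical: emb_a and emb_b are embeddings of the same sentence produced
# on two different GPUs/servers (simulated here with tiny added noise).
rng = np.random.default_rng(0)
emb_a = rng.normal(size=768).astype(np.float32)
emb_b = emb_a + rng.normal(scale=1e-6, size=768).astype(np.float32)

# Are the vectors identical up to floating-point noise?
print(np.allclose(emb_a, emb_b, atol=1e-5))

# Cosine similarity should still be ~1.0 even if the raw floats differ slightly.
cos = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
print(cos)
```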
-
**Function to edit in main.py for this issue:**
`def vectorize(sentence)`
Task:
- Convert the preprocessed text data into appropriate vector form.
The text data needs to be converted to vect…
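One way the body of `vectorize()` could look, assuming sentence-transformers is an acceptable choice here (the task may instead expect something like TF-IDF), with the model name below being only an example:
```python
from sentence_transformers import SentenceTransformer

# Loaded once at module level so vectorize() does not reload the model per call.
_model = SentenceTransformer("all-MiniLM-L6-v2")  # model name is an example

def vectorize(sentence):
    """Convert a preprocessed sentence into a dense vector (NumPy array)."""
    return _model.encode(sentence)
```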
-
Hi, my query is related to combining sentence embeddings and some external metrics. For the task of neural information retrieval, more specifically re-ranking, I have a few metrics such as page …
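One simple baseline for folding external metrics into re-ranking is a weighted sum of min-max-normalised scores; the sketch below is just that baseline, with `alpha` a hypothetical tuning weight, not a recommendation from this repo.
```python
import numpy as np

def rerank(sim_scores: np.ndarray, extra_scores: np.ndarray, alpha: float = 0.7):
    """Combine embedding similarity with an external metric via a weighted sum.

    Both arrays are min-max normalised to [0, 1] so the weight alpha is
    interpretable; alpha is a hypothetical tuning parameter.
    """
    def norm(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    combined = alpha * norm(sim_scores) + (1 - alpha) * norm(extra_scores)
    return np.argsort(-combined)  # document indices, best first

sims = np.array([0.82, 0.40, 0.77])     # cosine similarities from embeddings
extra = np.array([12.0, 95.0, 30.0])    # external metric, e.g. a page-level score
print(rerank(sims, extra))
```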