-
Hi!
If I run sentence-transformers without pre-training, is it equivalent to apply mean-pooling to the last layer of BERT?
For example, if I run the below code,
```python
# Use BERT for mappin…
```
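On the question above: without fine-tuning, pooling token embeddings is exactly what mean pooling does, and it must respect the attention mask so padding tokens don't dilute the average. A minimal sketch of that pooling step (in numpy, with toy vectors standing in for BERT's last hidden state):

```python
import numpy as np

def mean_pool(last_hidden_state, attention_mask):
    """Mean-pool token embeddings, ignoring padding positions.

    last_hidden_state: (seq_len, hidden) array of token vectors
    attention_mask:    (seq_len,) array of 0/1, where 1 marks a real token
    """
    mask = attention_mask[:, None].astype(float)      # (seq_len, 1)
    summed = (last_hidden_state * mask).sum(axis=0)   # sum over real tokens
    count = mask.sum()                                # number of real tokens
    return summed / count

# Toy check: two real tokens, one padding token that must be ignored.
tokens = np.array([[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]])
mask = np.array([1, 1, 0])
print(mean_pool(tokens, mask))  # [2. 3.]
```

The padding row is excluded from both the sum and the divisor; averaging over all rows including padding would skew the sentence vector.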
-
Hi,
Thanks for the great work! My question is: how can I use the trained model to split a sentence? For example:
sentence: last month we went on vocation the trip was very hard but it was wor…
-
i.e. norm(v)=1, where v is the vector (512 dimensions) of a sentence
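The constraint above (unit Euclidean norm for the sentence vector) is just L2 normalization; a quick sketch:

```python
import numpy as np

def l2_normalize(v):
    """Scale v so that its Euclidean norm is exactly 1."""
    return v / np.linalg.norm(v)

v = np.random.default_rng(0).normal(size=512)  # e.g. a 512-d sentence vector
u = l2_normalize(v)
print(np.linalg.norm(u))  # 1.0 (up to floating-point rounding)
```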
-
@linanqiu Thanks for the wonderful presentation of your program!
However, I want to know how this model can be used to predict the sentiment of a given sentence.
-
# Requirements
- [ ] Read about the input-embedding technique (byte-pair encoding) used by Google's team in the "Attention Is All You Need" paper.
- [ ] Design the input embeddings pipeline for **wmt 2014 e…
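For the first item: the core of byte-pair encoding is repeatedly merging the most frequent adjacent symbol pair in the corpus. A minimal sketch of one merge step over a hypothetical toy corpus (the vocabulary and frequencies below are made up for illustration):

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across a corpus.

    words: dict mapping a tuple of symbols to its corpus frequency.
    Returns the most frequent adjacent pair.
    """
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: character-level words with frequencies.
words = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2,
         ("l", "o", "g"): 2, ("n", "e", "w"): 3}
pair = most_frequent_pair(words)   # ("l", "o") appears 9 times
words = merge_pair(words, pair)
print(pair, words)
```

Running this loop for a fixed number of merges yields the subword vocabulary; the paper's WMT 2014 En-De setup applies the same idea at much larger scale.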
-
**6.5 Vector Compress Instruction** has the following text:
> The vector mask register specified by vs1 indicates which of the first vl elements of vector register group vs2 should be extracted and…
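The semantics quoted above (mask bits in vs1 select which of the first vl elements of vs2 are packed contiguously into vd) can be sketched in scalar code; the names below follow the spec's register names, and tail handling is simplified:

```python
def vcompress(vd, vs2, mask, vl):
    """Pack the mask-selected elements of vs2[0:vl] contiguously into vd.

    Elements of vs2 whose mask bit is 1 are written to vd starting at
    index 0, in source order; elements of vd past the last one written
    are left unchanged (tail policy simplified here).
    """
    j = 0
    for i in range(vl):
        if mask[i]:
            vd[j] = vs2[i]
            j += 1
    return vd

vs2  = [10, 20, 30, 40, 50, 60, 70, 80]
mask = [1, 0, 1, 0, 0, 1, 1, 0]
vd   = [0] * 8
print(vcompress(vd, vs2, mask, vl=8))  # [10, 30, 60, 70, 0, 0, 0, 0]
```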
-
@lalitpagaria for getting document vectors we can use this
https://github.com/UKPLab/sentence-transformers
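One common way to turn sentence-transformers output into a document vector is to encode each sentence and average the embeddings. A sketch of the averaging step (the model name in the comment is only an example; the toy vectors stand in for real embeddings):

```python
import numpy as np

def document_vector(sentence_embeddings):
    """Average per-sentence embeddings into a single document vector."""
    return np.asarray(sentence_embeddings).mean(axis=0)

# With sentence-transformers the embeddings would come from something like:
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("all-MiniLM-L6-v2")  # example model name
#   embs = model.encode(sentences)
embs = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]  # toy stand-in embeddings
print(document_vector(embs))  # [1. 1.]
```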
-
Hi!
The code is great!
I use this code to implement paragraph2vec, and I found that there may be several iterations of the training. If we use the following code to train several times like thi…
-
With vec2txt we should be able to get a reasonably useful sentence out of the average embeddings of a cluster. This could serve as the cluster label, or perhaps as guidance for summarizing the label.
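A cheap stand-in for the vec2txt step, while it is being built, is to label a cluster with the member sentence whose embedding lies closest to the cluster centroid. A hedged sketch (function name hypothetical, toy 2-d embeddings):

```python
import numpy as np

def nearest_to_centroid(embeddings, sentences):
    """Return the sentence whose embedding is closest to the cluster mean.

    Instead of decoding the averaged embedding back to text, pick the
    most central member sentence as the cluster label.
    """
    embs = np.asarray(embeddings, dtype=float)
    centroid = embs.mean(axis=0)
    dists = np.linalg.norm(embs - centroid, axis=1)
    return sentences[int(dists.argmin())]

embs = [[0.0, 0.0], [1.0, 1.0], [0.4, 0.6]]
sents = ["far left", "far right", "central"]
print(nearest_to_centroid(embs, sents))  # "central"
```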
…
-
Hi Yeon, may I ask a question about the ranking strategy? From my understanding, direct ranking for contrastive learning actually helps the SimCSE model little; however, it largely enhances…