-
Hi all,
The new [Performer](https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html) model may enable us to embed longer documents using the same SBERT method. HuggingFace is…
-
I am working on Windows, where I encountered a TypeError stating that `INSTRUCTOR._load_sbert_model()` got an unexpected keyword argument 'token'. I guess this error occurred due to version issues of sente…
-
## Jiphyeonjeon Intermediate Study Group
- Sunday, May 29, 2022, 9:00
- Presented by Heo Eun-jin, Won Hye-jin, Jeong Min-ji, and Choi Seok-min
- Paper link: https://arxiv.org/abs/1908.10084
> ### Abstract
> BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new …
-
It would be really nice to have WebGPU support for running other transformer models, such as SBERT and other embedding models. For example, here's [transformer.js](https://xenova.github.io/transformers.js/)
Th…
-
Hi,
I am trying to use the "gpt2" Hugging Face model with the SBERT code, where it gives a NoneType error in the `get_sentence_features` function.
word_embedding_model = models.Transformer('gpt2')
I read a…
-
How could SBERT (before fine-tuning) work without supervision? I don't quite understand the content of Section 4 of the paper.
I mean, is it fair to compare BERT without fine-tuning against SBERT after fi…
-
Hello,
I am fine-tuning a bi-encoder SBERT model on domain-specific data for semantic similarity. There is no loss value reported by the `fit` function from the package. Any idea how to know if the mo…
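For what it's worth, `fit` indeed doesn't print the training loss by itself. One common workaround is to wrap the loss object so every call records its value; in sentence-transformers the loss is an `nn.Module` you would subclass and log inside `forward`, but the pure-Python sketch below (where `LossLogger` is a hypothetical wrapper, not part of the library) shows the pattern without torch:

```python
# Hypothetical wrapper: intercepts calls to a loss function and records
# each value, so you can inspect the curve that fit() does not print.
class LossLogger:
    def __init__(self, loss_fn):
        self.loss_fn = loss_fn
        self.history = []               # recorded loss values, in call order

    def __call__(self, *args, **kwargs):
        value = self.loss_fn(*args, **kwargs)
        self.history.append(float(value))
        return value


# Demo with a stand-in loss; with sentence-transformers you would apply
# the same idea by subclassing e.g. losses.CosineSimilarityLoss and
# appending the value inside forward() (assumption: that subclass is
# then passed to fit() in place of the plain loss).
fake_loss = iter([0.9, 0.5, 0.2])
logger = LossLogger(lambda batch: next(fake_loss))
for step in range(3):
    logger(step)
print(logger.history)  # [0.9, 0.5, 0.2]
```

A falling `history` is a quick sanity check that the model is actually training.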
-
Although Bumblebee already supports many sentence-transformer models, the MPNet-based ones are currently not usable. Since these are quite popular on Hugging Face and also perform best accordi…
-
Hi,
I found results on the STS Benchmark without training SBERT:
the SBERT model consists of a BERT layer plus a pooling layer, and I evaluated it using SentEval.
I ran the same evaluation on USE. Below are the results…
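For context, the untrained baseline described here is just mean pooling over token embeddings followed by cosine similarity between sentence vectors. A self-contained sketch with toy numbers (no model involved; the vectors are made up for illustration):

```python
import math

def mean_pool(token_embeddings):
    """Average token vectors into one sentence vector (the pooling layer)."""
    dim = len(token_embeddings[0])
    n = len(token_embeddings)
    return [sum(tok[d] for tok in token_embeddings) / n for d in range(dim)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy token embeddings standing in for BERT's last-layer outputs.
sent_a = [[1.0, 0.0], [0.0, 1.0]]   # pools to [0.5, 0.5]
sent_b = [[2.0, 0.0], [0.0, 2.0]]   # pools to [1.0, 1.0]
score = cosine(mean_pool(sent_a), mean_pool(sent_b))
print(round(score, 4))  # 1.0 (same direction, different scale)
```

On STS-style benchmarks these pairwise scores are then correlated (Spearman) against the gold similarity labels, which is what SentEval reports.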
-
We need a fast way to find documents related to a given user query. I suggest a combination of:
- a TF-IDF vector space, or the improved BM25, for speed
- Sentence BERT to embed sentences of e…
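The two-stage idea above can be sketched as follows: a minimal textbook BM25 (one common variant of the IDF formula; `K1` and `B` are assumed defaults) selects candidates cheaply, and an SBERT encoder would then re-rank only those candidates. The second stage is left as a comment so the sketch runs without a model:

```python
import math
from collections import Counter

K1, B = 1.5, 0.75  # standard BM25 parameters (assumed defaults)

def bm25_scores(query, docs):
    """Score each tokenized doc against the tokenized query with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (K1 + 1) / (tf[t] + K1 * (1 - B + B * len(d) / avgdl))
        scores.append(s)
    return scores

docs = ["sbert embeds sentences", "bm25 ranks documents fast", "cats sleep all day"]
tokenized = [d.split() for d in docs]
query = "rank documents".split()
scores = bm25_scores(query, tokenized)

# Stage 1: keep the top-2 BM25 candidates.
top2 = sorted(range(len(docs)), key=lambda i: -scores[i])[:2]
# Stage 2 (not run here): re-rank top2 by cosine similarity between the
# SBERT embedding of the query and of each candidate document.
print(top2)  # doc index 1 ranks first
```

A production system would use an inverted index for stage 1 rather than scoring every document, but the division of labor is the same: lexical recall first, semantic precision second.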