-
RoBERTa-base: evaluating the model checkpoint provided with the paper gives 78.99 on the test set, lower than the 79.23 reported in the paper. Training from scratch only reaches 78.3.
-
A multitask training example would be helpful.
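For reference, a minimal multitask sketch using the classic `SentenceTransformer.fit()` API, which alternates batches across multiple (dataloader, loss) objectives; the two toy datasets and the base model here are placeholder assumptions:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("bert-base-uncased")  # placeholder base model

# Objective A: regression on similarity scores scaled to [0, 1]
sts_examples = [InputExample(texts=["A man eats.", "Someone is eating."], label=0.9)]
sts_loader = DataLoader(sts_examples, shuffle=True, batch_size=16)
sts_loss = losses.CosineSimilarityLoss(model)

# Objective B: ranking with in-batch negatives on (anchor, positive) pairs
nli_examples = [InputExample(texts=["A man eats.", "A person is eating."])]
nli_loader = DataLoader(nli_examples, shuffle=True, batch_size=16)
nli_loss = losses.MultipleNegativesRankingLoss(model)

# fit() round-robins batches between the objectives at each training step
model.fit(
    train_objectives=[(sts_loader, sts_loss), (nli_loader, nli_loss)],
    epochs=1,
    warmup_steps=100,
)
```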
-
I have been trying to figure out how to build sentence embeddings by leveraging the bi-LSTM layer. In the folder "average_word_embeddings" I use the example in `training_stsbenchmark_bilstm.py` and t…
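For context, this is roughly what `training_stsbenchmark_bilstm.py` does: GloVe word embeddings feed a BiLSTM, and max pooling over the LSTM output states yields the sentence embedding. The GloVe file path is an assumption (the actual script downloads the file first):

```python
from sentence_transformers import SentenceTransformer, models

# Word embeddings -> BiLSTM -> max pooling over the LSTM output states
word_emb = models.WordEmbeddings.from_text_file("glove.6B.300d.txt.gz")  # assumes the file is present
lstm = models.LSTM(
    word_embedding_dimension=word_emb.get_word_embedding_dimension(),
    hidden_dim=1024,
)
pooling = models.Pooling(
    lstm.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=False,
    pooling_mode_cls_token=False,
    pooling_mode_max_tokens=True,
)
model = SentenceTransformer(modules=[word_emb, lstm, pooling])

embeddings = model.encode(["A sentence to embed."])
```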
-
I am trying to reproduce the Condenser pretraining results. I evaluate the checkpoint on the STS-B task with sentence-transformers, but the results are different.
(1) bert-base-uncased
2022-01-03 17:07…
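One detail that often explains such gaps is the pooling strategy. A minimal evaluation sketch, assuming the `Luyu/condenser` checkpoint and mean pooling (switching to CLS pooling changes the STS-B number):

```python
from sentence_transformers import SentenceTransformer, models, InputExample
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

word_emb = models.Transformer("Luyu/condenser")  # assumed checkpoint name
pooling = models.Pooling(
    word_emb.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,  # mean vs. CLS pooling affects the score
)
model = SentenceTransformer(modules=[word_emb, pooling])

# Replace with the real STS-B dev/test pairs; gold scores scaled to [0, 1]
samples = [InputExample(texts=["A man eats.", "Someone is eating."], label=0.9)]
evaluator = EmbeddingSimilarityEvaluator.from_input_examples(samples, name="sts-b")
print(evaluator(model))
```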
-
Hello, and thank you for this useful code! I tried to reproduce the unsupervised BERT+SimCSE results, but failed. My environment setup is as follows:
pytorch=1.7.1
cudatoolkit=11.1
Single RTX 309…
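For comparison, unsupervised SimCSE can be approximated in sentence-transformers by pairing each sentence with itself and training with in-batch negatives, as the repo's SimCSE example does; dropout in the two forward passes provides the augmentation. Batch size and training data below are assumptions, and both strongly affect reproduction:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("bert-base-uncased")

# Each sentence is paired with itself; dropout makes the two encodings differ,
# and the other sentences in the batch serve as negatives.
sentences = ["A man is eating.", "A plane is taking off."]  # the paper uses ~1M wiki sentences
examples = [InputExample(texts=[s, s]) for s in sentences]
loader = DataLoader(examples, shuffle=True, batch_size=64, drop_last=True)
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1, show_progress_bar=True)
```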
-
Hi, I'm trying to fine-tune a cross-encoder for textual similarity, following the example provided in "cross-encoder/training_stsbenchmark.py" and setting num_labels=1.
However, I get confused abo…
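A minimal sketch of that setup: with `num_labels=1` the CrossEncoder has a single score head (with a sigmoid activation by default), so STS-B gold scores are scaled from [0, 5] to [0, 1] before training. The base model and hyperparameters are assumptions:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder import CrossEncoder

# num_labels=1 -> one regression-style output per sentence pair
model = CrossEncoder("distilroberta-base", num_labels=1)

# STS-B gold scores are in [0, 5]; scale them to [0, 1]
train_samples = [InputExample(texts=["A man eats.", "Someone is eating."], label=4.5 / 5.0)]
train_loader = DataLoader(train_samples, shuffle=True, batch_size=16)

model.fit(train_dataloader=train_loader, epochs=1, warmup_steps=100)
score = model.predict([["A man eats.", "Someone is eating."]])  # a single similarity score
```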
-
Thank you for your excellent work! Can you share how much compute and time you spent on pre-training Condenser and coCondenser? And what batch size and number of epochs were used?
-
Hi, folks!
Thank you very much for the hard work (^^)
I have a question on how to reproduce the results -- not that I am aiming to spot the differences, just making sure that I am running the code …
-
Hi,
To export a transformer model to ONNX format for inference, I use a multilingual sentence-transformer model based on the HF transformers library (separate tokenizer and model + mean…
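A sketch of one way to do this: wrap the HF model and the mean pooling in a single `nn.Module` so the exported ONNX graph returns the sentence embedding directly. The model name is an assumed example:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"  # assumed model
tokenizer = AutoTokenizer.from_pretrained(name)

class SentenceEncoder(torch.nn.Module):
    """Transformer plus attention-masked mean pooling in one graph."""
    def __init__(self, name):
        super().__init__()
        self.model = AutoModel.from_pretrained(name)

    def forward(self, input_ids, attention_mask):
        token_emb = self.model(input_ids=input_ids, attention_mask=attention_mask)[0]
        mask = attention_mask.unsqueeze(-1).float()
        return (token_emb * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

encoder = SentenceEncoder(name).eval()
dummy = tokenizer(["hello world"], return_tensors="pt")
torch.onnx.export(
    encoder,
    (dummy["input_ids"], dummy["attention_mask"]),
    "sentence_encoder.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["sentence_embedding"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "sentence_embedding": {0: "batch"},
    },
    opset_version=14,
)
```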
-
Amazing work ^^ and thanks for your beautiful code; I can easily reproduce the results. However, I have a small question: since the STS results are reported as an average, how can I see the performance …
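One way to get per-task numbers instead of the pooled average is to build one evaluator per STS dataset; `load_task` below is a hypothetical helper returning (sentence1, sentence2, score-in-[0, 1]) triples, and the checkpoint path is a placeholder:

```python
from sentence_transformers import SentenceTransformer, InputExample
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("path/to/checkpoint")  # placeholder path

# One evaluator per STS task rather than a single pooled average
for task in ["STS12", "STS13", "STS14", "STS15", "STS16", "STSBenchmark", "SICKRelatedness"]:
    samples = [
        InputExample(texts=[s1, s2], label=score)
        for s1, s2, score in load_task(task)  # hypothetical data loader
    ]
    evaluator = EmbeddingSimilarityEvaluator.from_input_examples(samples, name=task)
    print(task, evaluator(model))
```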