-
So, for example:
word_embedding_model = models.Transformer('bert-base-uncased')
cnn = models.CNN(in_word_embedding_dimension=word_embedding_model.get_word_embedding_dimension(), out_channels=2…
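For context, what a CNN module of this kind computes on top of the transformer's token embeddings can be sketched in plain Python. This is a toy illustration with hypothetical filter weights, not the library's implementation:

```python
def conv1d_tokens(tokens, filters):
    """Minimal 1D convolution over a token-embedding sequence.

    tokens:  list of T token embedding vectors, each of dimension D
             (what the transformer word-embedding model outputs).
    filters: list of filters; each filter is a list of K weight vectors
             of dimension D (K is that filter's window size).
    Returns a T x len(filters) feature map, using 'same' padding
    (positions outside the sequence contribute zero).
    """
    out = []
    for t in range(len(tokens)):
        row = []
        for f in filters:
            k = len(f)
            half = k // 2
            acc = 0.0
            for offset in range(k):
                idx = t - half + offset
                if 0 <= idx < len(tokens):
                    # Dot product of one filter position with one token embedding.
                    acc += sum(w * x for w, x in zip(f[offset], tokens[idx]))
            row.append(acc)
        out.append(row)
    return out

# A single size-1 filter that simply reads out the first embedding dimension.
features = conv1d_tokens([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                         [[[1.0, 0.0]]])
```

Each output channel corresponds to one filter, so `out_channels` in the library call above would determine how many such feature columns are produced per token.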
-
I see there is the feature 'token_weights_sum'. Is there any example of feeding this into the model?
-
Hello!
Thank you very much for the work being done on this package. It has helped me tremendously.
I am finetuning the model on custom data, using the triplet function. It has worked like a char…
-
Hi,
I would like to create my own domain-specific "stsb" dataset to further improve performance.
I have a 500 GB domain specific text corpus and want to use / label some of the sentence pairs.
Do …
-
Hi,
Given a model in {BERT, XLM, XLNet, ...}, do you have a dictionary of the estimated best number of epochs for training your Siamese network on the NLI dataset?
Else, what would be your suggestion…
-
Did you try SimCSE's supervised training objective in-domain on USEB?
Would be interesting to compare to SBERT-supervised...!
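For reference, the supervised SimCSE objective (anchor / positive / hard-negative triplets, with a cross-entropy over temperature-scaled cosine similarities against all in-batch candidates) can be sketched in plain Python. This is a toy illustration of the loss, not the authors' implementation:

```python
import math


def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))


def supervised_simcse_loss(anchors, positives, hard_negatives,
                           temperature=0.05):
    """Supervised SimCSE loss over one batch of embedding triplets.

    For each anchor i, the candidate set is every positive plus every
    hard negative in the batch; the target is the anchor's own positive
    (candidate index i). Loss is the mean negative log-softmax of the
    correct candidate's temperature-scaled cosine similarity.
    """
    candidates = positives + hard_negatives
    total = 0.0
    for i, a in enumerate(anchors):
        logits = [cosine(a, c) / temperature for c in candidates]
        # Numerically stable log-sum-exp for the softmax normalizer.
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += -(logits[i] - log_z)
    return total / len(anchors)
```

Plugged into in-domain training, this would replace the unsupervised dropout-noise objective while keeping the rest of the pipeline unchanged.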
-
Hi,
Expected behaviour: When I create a SentenceTransformer model by importing an HF model and fine-tuning it with the NLI code example, it should work when encoding text.
Actual behaviour: C…
-
Hi, firstly thank you so much for sharing such awesome work with us. I am trying to train semantic textual similarity on my own dataset, which includes sentence pairs of robotic task descriptions. For…
-
The code at line 53 of LabelAccuracyEvaluator.py:
_, prediction = model(features[0])
It does not work; when I run this code, an error occurs.
-
The goal of this segment is to create meaningful benchmark subsets with a minimal set of tasks.
I believe the steps are as follows:
1) construct an experimental subset. If people agree I can con…