If I use SoftmaxLoss in the NLI example shown at https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/nli/training_nli.py, during training the base model (Transformer + pooling) is passed to https://github.com/UKPLab/sentence-transformers/blob/master/sentence_transformers/losses/SoftmaxLoss.py, which adds an nn.Linear layer with a softmax activation on top of the base model. My evaluator is BinaryClassificationEvaluator. During evaluation, instead of producing labeled output, the evaluator computes embeddings for source-target sentence pairs and returns the best threshold for various similarity measures such as cosine distance, dot product, etc. Using that threshold for inference works fine in my case. But my question is: is there any way to get labeled output from a saved SentenceTransformer model (i.e., from checkpoints saved during training)?
I am also trying to use SoftmaxLoss to build a two-class model. I think you can load the saved model and then get the label probabilities from the classifier output inside the softmax loss.
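To illustrate the suggestion above: with its default settings, SoftmaxLoss feeds its nn.Linear classifier the concatenation (u, v, |u - v|) of the two sentence embeddings, so if you save that classifier's weights during training you can reuse them for labeled inference. A minimal sketch, assuming the head was saved with something like `torch.save(loss.classifier.state_dict(), "softmax_classifier.pt")`, and using random tensors as stand-ins for real `model.encode(...)` output so the example runs without downloading a model:

```python
import torch
import torch.nn as nn

def predict_label(u: torch.Tensor, v: torch.Tensor, classifier: nn.Linear):
    # SoftmaxLoss default feature construction: concat(u, v, |u - v|)
    features = torch.cat([u, v, torch.abs(u - v)], dim=-1)
    logits = classifier(features)
    probs = torch.softmax(logits, dim=-1)
    return probs, probs.argmax(dim=-1)

# Hypothetical setup: 768-dim embeddings, 2 classes.
dim, num_labels = 768, 2
classifier = nn.Linear(3 * dim, num_labels)
# In practice, load the weights you saved during training, e.g.:
# classifier.load_state_dict(torch.load("softmax_classifier.pt"))

# Stand-ins for model.encode(...) on a source-target sentence pair.
u = torch.randn(1, dim)
v = torch.randn(1, dim)
probs, label = predict_label(u, v, classifier)
print(probs.shape, int(label))
```

Note that `model.save()` on the SentenceTransformer does not include the loss's classifier head, so the head must be saved separately; the file name `softmax_classifier.pt` here is just an assumed example.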