Open deepankar27 opened 3 years ago
A Bi-Encoder processes the two inputs independently, produces an embedding for each, and then computes the cosine similarity between them.
A Cross-Encoder concatenates the two inputs, passes them jointly through the transformer network, takes the CLS token output, and applies a down-projection to 1 dimension, which is the output score.
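The two scoring paths described above can be sketched as follows. This is a minimal, illustrative NumPy sketch, not the library's implementation: random vectors stand in for the encoder outputs, and `down_proj` stands in for the cross-encoder's learned 1-dimensional projection.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 768  # typical transformer hidden size

# --- Bi-Encoder: encode each sentence independently, then compare ---
emb1 = rng.normal(size=DIM)  # stand-in for encoder(sentence1)
emb2 = rng.normal(size=DIM)  # stand-in for encoder(sentence2)

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

bi_score = cosine_similarity(emb1, emb2)

# --- Cross-Encoder: one joint forward pass, then project CLS to 1 dim ---
cls_output = rng.normal(size=DIM)  # stand-in for transformer("[CLS] s1 [SEP] s2")[CLS]
down_proj = rng.normal(size=DIM)   # stand-in for the learned projection weights
cross_score = float(cls_output @ down_proj)  # a single unbounded logit

print(bi_score, cross_score)
```

Note the practical consequence: bi-encoder embeddings can be precomputed and compared cheaply, while the cross-encoder needs a full forward pass for every sentence pair.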
@nreimers Thank you for the prompt reply. So, is the semantic score used as the label only for validation?
Sometimes for validation, sometimes also for training. It depends on the specific script.
All right! It would be great if you could tell me which script uses it for training, and I will take it from there. My intent was just to clarify whether the scores are used for fine-tuning any models or not, that's all.
Hello Team,
I have one small point of confusion: in both the cross-encoder and the bi-encoder you take the semantic score as the label, but it is unclear to me how it is mapped or used during the training process. Can you please shed some light on this?
InputExample(texts=['sentence1', 'sentence2'], label=0.3),
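For context on how a label like the `0.3` above can enter training: in the regression-style setups the gold similarity score is used as the target for the cosine similarity of the two embeddings (this mirrors what `CosineSimilarityLoss` in sentence-transformers does; the NumPy code below is only an illustrative sketch of that idea, not the library's code).

```python
import numpy as np

def mse_cosine_loss(emb1, emb2, label):
    """Squared error between cosine(emb1, emb2) and the gold similarity label.

    Minimizing this pushes the embeddings of the pair toward a cosine
    similarity equal to the labeled semantic score.
    """
    cos = float(emb1 @ emb2 / (np.linalg.norm(emb1) * np.linalg.norm(emb2)))
    return (cos - label) ** 2

rng = np.random.default_rng(0)
e1 = rng.normal(size=16)  # stand-in for encoder('sentence1')
e2 = rng.normal(size=16)  # stand-in for encoder('sentence2')
loss = mse_cosine_loss(e1, e2, 0.3)  # 0.3 is the label from the InputExample
print(loss)
```

During training, this scalar loss would be backpropagated through the encoder so the embedding space reflects the labeled similarities.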