UKPLab / sentence-transformers

State-of-the-Art Text Embeddings
https://www.sbert.net
Apache License 2.0
15.36k stars 2.49k forks

My model doesn't seem to learn anything #757

Open minyoung90 opened 3 years ago

minyoung90 commented 3 years ago

Thanks for great work!

My code is based on https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/nli/training_nli.py and https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/sts/training_stsbenchmark.py (all settings are the same as in those scripts).

I used 'monologg/kobert' as the embedding model and https://github.com/kakaobrain/KorNLUDatasets as the dataset.

The cosine_pearson and cosine_spearman scores oscillate between 0.29 and 0.31 and never rise above 0.32, but I think they should be at least 0.60!

I ran 20 epochs for the NLI task and 100 epochs for the STS task, but the results stay stuck below 0.32.

Any ideas? I want to build a Korean SBERT and use it as the teacher model for multilingual training!

nreimers commented 3 years ago

Yes, I got quite good performance when training on that dataset. I would assume there is an issue with how the input examples are created, e.g. that they contain the wrong data.

Try inspecting the InputExamples for your train and dev sets to see if they make sense (i.e. are correct). Maybe you read the wrong column or something like that from their CSV files.
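One way to spot-check the parsing is to load a few rows yourself and eyeball the sentence pairs and labels before they ever become InputExamples. This is only a minimal sketch: the tab delimiter and the column names `sentence1`, `sentence2`, `gold_label` are assumptions about the KorNLU TSV layout, and `load_nli_rows` is a hypothetical helper, not part of sentence-transformers.

```python
import csv
import io

# Label mapping used by the NLI training example (SoftmaxLoss with 3 classes).
label2int = {"contradiction": 0, "entailment": 1, "neutral": 2}

def load_nli_rows(fileobj, limit=5):
    """Parse the first `limit` valid rows and return (sentence1, sentence2,
    label_id) triples -- the same data an InputExample would carry."""
    rows = []
    reader = csv.DictReader(fileobj, delimiter="\t", quoting=csv.QUOTE_NONE)
    for row in reader:
        label = row["gold_label"].strip()
        if label not in label2int:  # skip malformed or unlabeled rows
            continue
        rows.append((row["sentence1"], row["sentence2"], label2int[label]))
        if len(rows) >= limit:
            break
    return rows

# Quick sanity check on a two-row stand-in for the real TSV file
sample = (
    "sentence1\tsentence2\tgold_label\n"
    "A man eats.\tA person eats.\tentailment\n"
    "A man eats.\tNobody eats.\tcontradiction\n"
)
for s1, s2, y in load_nli_rows(io.StringIO(sample)):
    print(repr(s1), repr(s2), y)
```

If the printed pairs look shuffled, truncated, or the labels don't match the sentences, the bug is in the file reading, not the model.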

minyoung90 commented 3 years ago

Thanks! I will try it!