Closed ud2195 closed 4 years ago
I think you just need to try some more hyperparameters. In particular, try a lower learning rate (I'm a fan of 1e-5), and set `"correct_bias": true` in the parameters for the optimizer.
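A hedged sketch of where those settings would go in the config, assuming the trainer uses AllenNLP's `huggingface_adamw` optimizer (the learning rate here is just the value suggested above, not a tuned recipe):

```jsonnet
// Sketch only: optimizer fragment for the trainer section of the config.
// "huggingface_adamw" and "correct_bias" are real AllenNLP options; the
// lr value is illustrative.
trainer: {
  optimizer: {
    type: "huggingface_adamw",
    lr: 1e-5,
    correct_bias: true,
  },
},
```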
Is it possible that your training set is really unbalanced, and that's why you get high accuracy with only one kind of output?
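A quick sanity check for that, as a sketch assuming the labels can be loaded as a plain list of strings:

```python
from collections import Counter


def label_distribution(labels):
    """Return each label's fraction of the dataset, e.g. to spot imbalance."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}


# Example with a skewed label set: one class dominates at 80%.
print(label_distribution(["entailment"] * 8 + ["contradiction"] * 2))
```

If one label dominates, high accuracy with a single predicted class is exactly what you'd expect.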
This issue is being closed due to lack of activity. If you think it still needs to be addressed, please comment on this thread 👇
Hi, I followed the exact config file given here https://github.com/allenai/allennlp-models/blob/master/training_config/pair_classification/snli_roberta.jsonnet and only changed `max_len` in my config file, so it now looks like this:

The objective is to do textual entailment using RoBERTa, but after training my model for 2 epochs the accuracy (roughly 79%) hardly changed, and when I then ran prediction, the model strangely predicted the same label for all the instances in the test data.
I have a few doubts. Is the `model_type: basic_classifier` mentioned in the default config right? Doesn't `basic_classifier` implement a normal text classifier?

Code I am using for prediction:
If the `model_type` here is wrong, then what `model_type` should I specify in place of `basic_classifier` to do textual entailment with RoBERTa? A sample config file for doing entailment with RoBERTa would really be helpful. Any help would be appreciated! @epwalsh @matt-gardner