Closed · MiriamFarber closed this issue 3 years ago
I'd like to run an inference loop for a RoBERTa model over a large set of sentence pairs (a couple of hundred thousand). I wanted to use `model.predict` and specify a batch size, but there is no way to pass the encoded inputs (`encoded_data`, the tokenizer output for the input data) to `model.predict`. So what is the alternative way to do that?
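The question's original code blocks did not survive, so for context here is a minimal sketch of the kind of setup being described, with a hypothetical checkpoint (`roberta-large-mnli`) and made-up sentence pairs standing in for the real data:

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Hypothetical checkpoint; the actual model from the question was not preserved.
model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

# Toy sentence pairs; in the question this would be hundreds of thousands of pairs.
first_sentences = ["A man is playing guitar.", "Two dogs run through a field."]
second_sentences = ["Someone is making music.", "The animals are indoors."]

# encoded_data is a BatchEncoding holding input_ids / attention_mask TF tensors.
encoded_data = tokenizer(
    first_sentences,
    second_sentences,
    padding=True,
    truncation=True,
    return_tensors="tf",
)
```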
Hi, this Kaggle notebook shows a very concise way to efficiently train and predict with Hugging Face's XLMRoberta (which uses the same input format as Roberta). Hope it helps!
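If the notebook link is unavailable: since Hugging Face's TF models are Keras models, one common pattern (a sketch only, reusing the hypothetical `model` and `encoded_data` from above; the batch size of 64 is arbitrary) is to pass the encodings to `model.predict` as a plain dict, or to wrap them in a `tf.data.Dataset`:

```python
import tensorflow as tf

# Option 1: Keras accepts a dict of named arrays and slices it into batches.
# Note: older transformers releases may return a plain tuple/array here
# instead of a dict-like output.
outputs = model.predict(dict(encoded_data), batch_size=64)
logits = outputs["logits"]  # shape (num_pairs, num_labels)

# Option 2: the same prediction via a tf.data.Dataset, which composes with
# .prefetch() and friends; for very large corpora you could instead tokenize
# and call the model batch-by-batch in a plain Python loop.
dataset = tf.data.Dataset.from_tensor_slices(dict(encoded_data)).batch(64)
logits = model.predict(dataset)["logits"]

predicted_labels = tf.argmax(logits, axis=-1).numpy()
```

One practical note: tokenizing everything up front pads every example to the longest sequence in the whole set, which wastes compute; for hundreds of thousands of pairs, tokenizing and predicting chunk by chunk (so padding is per chunk) is a cheap optimization.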