prajdabre / yanmtt

Yet Another Neural Machine Translation Toolkit
MIT License

How to perform inference using the fine-tuned model? #30

Closed. nikhilbyte closed this issue 2 years ago.

nikhilbyte commented 2 years ago

Hi @prajdabre, I pre-trained a very small BART model on a new language, and the pre-training is almost done. I'm going to fine-tune the model on a downstream task and then want to perform inference with the fine-tuned model. I can see the fine-tuning code in your repository but not the inference code. Could you please tell me how to perform inference using the fine-tuned model?

Also, is it possible to use the Hugging Face pipeline API with the trained model to do inference, as with other transformers models?

Thank you

prajdabre commented 2 years ago

An example decoding command is given here: https://github.com/prajdabre/yanmtt/blob/main/examples/decode_or_probe_model.sh

You can't use the model directly with an HF pipeline. I may write a script to convert models to an HF-compatible format.
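For reference, if such a conversion script existed and produced a standard Hugging Face model directory, inference would then follow the usual transformers pipeline pattern. The sketch below is hypothetical: `my_converted_model/` is a placeholder path, no conversion tool ships with YANMTT as of this thread, and the loading code assumes the converted checkpoint is a standard seq2seq model.

```python
def translate_batch(translator, sentences):
    """Run a translation pipeline over a batch and return plain strings.

    `translator` is any callable with the transformers translation-pipeline
    interface: given a list of strings, it returns a list of dicts each
    carrying a "translation_text" key.
    """
    outputs = translator(sentences)
    return [out["translation_text"] for out in outputs]


def build_translator(model_dir):
    """Load a converted checkpoint the standard Hugging Face way.

    Hypothetical: this only works after a YANMTT checkpoint has been
    converted to an HF-compatible directory, which the toolkit does not
    yet provide.
    """
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)
    return pipeline("translation", model=model, tokenizer=tokenizer)


# Intended usage (placeholder path):
# translator = build_translator("my_converted_model/")
# print(translate_batch(translator, ["Input sentence to translate."]))
```

Until a conversion script exists, the shell example linked above is the supported way to decode with a fine-tuned model.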