Open lhohoz opened 10 months ago
👋 There is a tutorial on how to convert your model to ONNX. Basically, you train your model with PyTorch and then convert it to ONNX. That suffices because transformers.js mimics transformers, using the same tokenizers, etc...
Hi there @lhohoz 👋 As stated in the model card of Xenova/nllb-200-distilled-600M, it is a fork of https://huggingface.co/facebook/nllb-200-distilled-600M (the original model), just with ONNX weights to make it compatible with transformers.js. For that reason, you can fine-tune the original model with the Python transformers library and then convert it to ONNX to be run with transformers.js 😇
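For reference, that conversion step can be sketched with 🤗 Optimum's CLI. This is a minimal sketch: the local checkpoint path and output directory below are placeholders, not names from this thread.

```shell
# Install the ONNX export tooling from Optimum
pip install "optimum[exporters]"

# Export a locally fine-tuned checkpoint (e.g. saved with
# Trainer.save_model) to ONNX; both paths are placeholders
optimum-cli export onnx \
  --model ./my-finetuned-nllb-200-distilled-600M \
  ./nllb-200-onnx/
```

Note that transformers.js conventionally looks for the ONNX weights in an `onnx/` subfolder of the model repo, alongside the tokenizer files, so you may need to arrange the exported files that way before loading them.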
Thanks all, will try
Question
Hello, if we have a large dataset in our domain, can we use it to fine-tune the hosted pretrained models (for example: Xenova/nllb-200-distilled-600M) with optimum? Or is it possible to convert our own PyTorch translation model to ONNX so that it is compatible with transformers.js?