There was still no help from the forums. See https://discuss.huggingface.co/t/valueerror-in-finetuning-nllb/35533
System Info

transformers version: 4.21.1

Who can help?

No response
Information

Tasks

An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
It is surprising that there is still no example of fine-tuning any of the NLLB models (at least the smallest one) in the Hugging Face Transformers environment. So I followed this guide and adapted the code to my case, namely `nllb-200-distilled-600M`. The custom train and eval datasets I want to fine-tune `nllb-200-distilled-600M` on consist of 2 entries each; see my attached code. Running this code gives me `ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds`.
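For context, below is a minimal sketch (not the attached code) of a setup that avoids this error, assuming a toy English-French pair with placeholder sentences and paths. The key point is that the error is raised when the model receives neither `decoder_input_ids` nor `labels`; once `labels` are present, the model shifts them right to build `decoder_input_ids` internally. Note that `text_target=` requires transformers 4.22 or newer; on 4.21.x the equivalent is `tokenizer.as_target_tokenizer()`.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "facebook/nllb-200-distilled-600M"
# src_lang / tgt_lang are FLORES-200 codes; this pair is an assumption.
tokenizer = AutoTokenizer.from_pretrained(
    checkpoint, src_lang="eng_Latn", tgt_lang="fra_Latn"
)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Toy 2-entry dataset, mirroring the size described above.
raw = Dataset.from_dict({
    "source": ["Hello, world.", "How are you?"],
    "target": ["Bonjour le monde.", "Comment allez-vous ?"],
})

def preprocess(batch):
    # text_target= makes the tokenizer emit `labels`; with labels present,
    # the model derives decoder_input_ids itself, avoiding the ValueError.
    return tokenizer(
        batch["source"], text_target=batch["target"],
        truncation=True, max_length=128,
    )

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="output_dir",  # placeholder path
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    # The collator pads labels with -100 and, given the model, also
    # prepares decoder inputs for each batch.
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
trainer.save_model("output_dir")  # writes the fine-tuned model files
```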
Expected behavior
A set of fine-tuned model files in my `output_dir`.