allanj opened this issue 7 months ago
I need this feature too and I believe we need to shard pretrained checkpoints into nanotron format.
That seems like a different problem. Am I understanding correctly — are you talking about the model checkpoint format?
If I'm trying to perform fine-tuning instead of language model training, then for the following requirements I think I have to modify `dataloader.py`:

- `group_texts` seems not applicable since I'm fine-tuning here (but maybe this is a minor concern).
- `DataCollatorForCLM` — should I just modify the input mask?

I'm not sure whether these are all the required modifications, or are there better suggestions so that I do not have to revise the source code?
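For what it's worth, the usual approach for fine-tuning is to keep the collator's `input_ids` unchanged but set the label positions that correspond to the prompt to an ignore index, so loss is only computed on the response tokens. Below is a minimal standalone sketch of that idea — this is not nanotron's actual `DataCollatorForCLM`; the field names (`prompt_len`), the `-100` ignore index, and the padding scheme are assumptions borrowed from common PyTorch/Hugging Face conventions:

```python
import numpy as np

IGNORE_INDEX = -100  # assumed ignore index, matching PyTorch's CrossEntropyLoss default

def collate_for_finetuning(examples, pad_token_id=0, max_length=16):
    """Pad each (prompt + response) sequence and mask prompt tokens in labels.

    Each example is {"input_ids": [...], "prompt_len": int}. Unlike a CLM
    pipeline that concatenates texts with group_texts into fixed-size blocks,
    each sequence here is padded individually, which is what fine-tuning needs.
    """
    batch_input, batch_labels = [], []
    for ex in examples:
        ids = ex["input_ids"][:max_length]
        labels = list(ids)
        # Do not compute loss on the prompt portion.
        n_mask = min(ex["prompt_len"], len(ids))
        labels[:n_mask] = [IGNORE_INDEX] * n_mask
        pad = max_length - len(ids)
        batch_input.append(ids + [pad_token_id] * pad)
        batch_labels.append(labels + [IGNORE_INDEX] * pad)
    return {"input_ids": np.array(batch_input), "labels": np.array(batch_labels)}
```

With this, only the response and EOS positions contribute to the loss, and no source-level change to `group_texts` is needed since it simply isn't called in the fine-tuning path.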