huggingface / nanotron

Minimalistic large language model 3D-parallelism training
Apache License 2.0

[Question] Modification for Performing Fine-Tuning #57

Open allanj opened 7 months ago

allanj commented 7 months ago

I'm trying to perform fine-tuning instead of language model (pre-)training, with the following requirement:

  1. Given the input ids $x$, compute the loss only on the output sequence $y$. Unlike LM training, I don't want any loss on the input tokens.

I think I have to modify dataloader.py:

  1. The packing done by group_texts doesn't seem applicable since I'm fine-tuning here (though this may be a minor concern).
  2. How do I disable the loss on the input tokens? I think I need a different collator than DataCollatorForCLM (a rough sketch follows below).

I'm not sure whether these are all the modifications needed, or whether there is a better approach that doesn't require revising the source code.
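A minimal sketch of the kind of collator this might look like. This is not nanotron's actual API: the output keys (input_ids, input_mask, label_ids, label_mask) only mirror what DataCollatorForCLM appears to produce, and prompt_length is a hypothetical per-example field marking where the prompt ends.

```python
# Rough sketch only, not nanotron's actual API. Output keys mirror
# DataCollatorForCLM; "prompt_length" is a hypothetical per-example field.
import numpy as np

def collate_sft(examples, sequence_length, pad_token_id=0):
    """examples: list of dicts with 'input_ids' (prompt + response tokens)
    and 'prompt_length' (number of prompt tokens)."""
    batch_size = len(examples)
    tokens = np.full((batch_size, sequence_length + 1), pad_token_id, dtype=np.int64)
    mask = np.zeros((batch_size, sequence_length + 1), dtype=bool)

    for i, ex in enumerate(examples):
        ids = ex["input_ids"][: sequence_length + 1]
        tokens[i, : len(ids)] = ids
        # Loss only on the response: leave the prompt (and padding) masked out.
        mask[i, ex["prompt_length"] : len(ids)] = True

    return {
        "input_ids": tokens[:, :-1],                   # model inputs
        "input_mask": tokens[:, :-1] != pad_token_id,  # attention mask
        "label_ids": tokens[:, 1:],                    # next-token targets
        "label_mask": mask[:, 1:],                     # loss only where True
    }
```

With this, packing via group_texts is simply skipped and each example keeps its own prompt/response boundary.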

seanexp commented 7 months ago

I need this feature too and I believe we need to shard pretrained checkpoints into nanotron format.

allanj commented 7 months ago

> I need this feature too and I believe we need to shard pretrained checkpoints into nanotron format.

That seems like a different problem, if I'm understanding correctly. Are you talking about the model checkpoint format?

allanj commented 6 months ago

Should I just modify the input mask?
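If nanotron's loss already reduces over a label mask (as the DataCollatorForCLM field names suggest), then zeroing that mask on the prompt tokens should be enough; the attention/input mask would stay unchanged. A minimal sketch of the idea, in plain PyTorch rather than nanotron's actual loss code, using the label_ids / label_mask names from above:

```python
# Illustrative only, not nanotron's implementation: a CLM loss reduced over
# label_mask gives "no loss on input tokens" for free once the collator
# zeroes the mask on the prompt positions.
import torch
import torch.nn.functional as F

def masked_clm_loss(logits, label_ids, label_mask):
    # logits: [batch, seq, vocab]; label_ids, label_mask: [batch, seq]
    per_token = F.cross_entropy(
        logits.flatten(0, 1), label_ids.flatten(), reduction="none"
    ).view(label_ids.shape)
    mask = label_mask.float()
    # Average only over positions where the mask is set (the response tokens).
    return (per_token * mask).sum() / mask.sum().clamp(min=1.0)
```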