shikras / shikra


Question about the training parameter settings at stage 1 and stage 2 #37

Open Lanxin1011 opened 10 months ago

Lanxin1011 commented 10 months ago

Dear authors, thanks for the great work!~ I am trying to reproduce the shikra-7b model by training from vicuna-7b, following the descriptions in the SHIKRA paper, but I am confused about some details of the parameter settings. Could you please help me with the following questions?

  1. Should I use vicuna-7b as the initialization weights for training? If yes, should I use the raw vicuna-7b, or should I first replace its config.json, generation_config.json, special_tokens_map.json, and tokenizer_config.json with those of shikra-7b? These JSON files differ between the two models. (I've put a small comparison sketch after the questions.)

  2. Which config files should I use for stage 1 and stage 2? shikra_pretrain_concat8_stage1.py for stage 1 and shikra_pretrain_final19_stage2.py for stage 2? And what is shikra_pretrain_concat3_stage0.py used for? Only stage 1 and stage 2 are introduced in the paper.

  3. Are the two stages trained separately, i.e., train stage 1 first, save the model, and then resume from that checkpoint to train stage 2? Also, what is num_train_epochs for stage 2? Does it use the same setting of 1.5 epochs as stage 1? (I've sketched my current launch commands after the questions.)

    Really looking forward to your reply~
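For question 1, here is the quick check I am using to see where the tokenizer setups actually differ. This is only a sketch; the Hugging Face model IDs below are my guesses for the published checkpoints, not something the authors have confirmed:

```python
# Sketch: compare the special-token configuration of vicuna-7b and shikra-7b
# to see which of the four JSON files actually matter. The model IDs are my
# assumptions for the released weights.
from transformers import AutoTokenizer

vicuna = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.1")
shikra = AutoTokenizer.from_pretrained("shikras/shikra-7b-delta-v1")

print("vicuna special tokens:", vicuna.special_tokens_map)
print("shikra special tokens:", shikra.special_tokens_map)
print("vicuna vocab size:", len(vicuna))
print("shikra vocab size:", len(shikra))
```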
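For question 3, this is how I currently plan to launch the two stages, adapted from the launch command in the README. The process count, the output paths, and especially the idea of initializing stage 2 from the stage 1 output are my assumptions; please correct me if they are wrong:

```shell
# Stage 1: initialize from raw vicuna-7b (path is a placeholder).
accelerate launch --num_processes 4 \
    mllm/pipeline/finetune.py \
    config/shikra_pretrain_concat8_stage1.py \
    --cfg-options model_args.model_name_or_path=/path/to/vicuna-7b \
        training_args.output_dir=./output/stage1

# Stage 2: resume from the stage 1 checkpoint (is this the intended setup?).
accelerate launch --num_processes 4 \
    mllm/pipeline/finetune.py \
    config/shikra_pretrain_final19_stage2.py \
    --cfg-options model_args.model_name_or_path=./output/stage1 \
        training_args.output_dir=./output/stage2
```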

Anymake commented 9 months ago

+1

harrytea commented 9 months ago

+1

GaoXiaoshan commented 6 months ago

+1