Dear authors,
Thanks for the great work!~ I am trying to reproduce the shikra-7b model by training from vicuna-7b following the descriptions in the SHIKRA paper, but I'm confused about some details of the parameter settings. Could you please help me with the following questions?
Should I use vicuna-7b as the initial weights for training? If yes, should I use the raw vicuna-7b, or should I first replace its config.json, generation_config.json, special_tokens_map.json, and tokenizer_config.json with the ones from shikra-7b? I ask because these JSON files differ between vicuna-7b and shikra-7b.
Which config files should I use for stage 1 and stage 2? Is it _shikra_pretrain_concat8stage1.py for stage 1 and _shikra_pretrain_final19stage2.py for stage 2? Also, could you tell me what _shikra_pretrain_concat3stage0.py is used for? Only stage 1 and stage 2 are introduced in the paper.
Are the two stages trained separately, i.e., train stage 1 first, save the model, then resume from that checkpoint for stage 2? And what is num_train_epochs for stage 2? Does it use the same setting of 1.5 epochs as stage 1?
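To make my current assumption about the two-stage flow concrete, here it is as an echo'd dry run. Everything below is my guess, not taken from your docs: the launcher, the mllm/pipeline/finetune.py script path, the --cfg-options flag, the option names, and the output paths.

```shell
# Dry-run sketch of my assumed two-stage training flow.
# All paths, script names, and flags below are hypothetical.

# Stage 1: initialize from raw vicuna-7b and save a checkpoint.
STAGE1_CMD="accelerate launch mllm/pipeline/finetune.py _shikra_pretrain_concat8stage1.py \
  --cfg-options model_args.model_name_or_path=./vicuna-7b \
  training_args.output_dir=./ckpt/stage1"

# Stage 2: resume from the stage-1 checkpoint (this is the step I want to confirm).
STAGE2_CMD="accelerate launch mllm/pipeline/finetune.py _shikra_pretrain_final19stage2.py \
  --cfg-options model_args.model_name_or_path=./ckpt/stage1 \
  training_args.output_dir=./ckpt/stage2"

# Print instead of executing, since this is only a sketch.
echo "$STAGE1_CMD"
echo "$STAGE2_CMD"
```

Is this the intended pipeline, or do the stage configs already encode the checkpoint hand-off internally?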
Really looking forward to your reply!~