shikras / shikra


Training on 8 V100s is too slow; shikra_pretrain_final19_stage2 takes nearly 800 h. Does anyone have a similar situation? #47

Open Anymake opened 1 year ago

Anymake commented 1 year ago

accelerate launch --num_processes 8 \
    --main_process_port 23786 \
    mllm/pipeline/finetune.py \
    config/shikra_pretrain_final19_stage2.py \
    --cfg-options model_args.model_name_or_path=../models/shikras/shikra-7b-0708 \
    --overwrite_output_dir \
    --per_device_train_batch_size 2

{'loss': 0.1921, 'learning_rate': 3.0703101013202335e-08, 'epoch': 0.0}
{'loss': 0.1677, 'learning_rate': 6.140620202640467e-08, 'epoch': 0.0}
{'loss': 0.1395, 'learning_rate': 9.2109303039607e-08, 'epoch': 0.0}
{'loss': 0.1647, 'learning_rate': 1.2281240405280934e-07, 'epoch': 0.0}
{'loss': 0.1434, 'learning_rate': 1.535155050660117e-07, 'epoch': 0.0}
{'loss': 0.1707, 'learning_rate': 1.84218606079214e-07, 'epoch': 0.0}
{'loss': 0.131, 'learning_rate': 2.1492170709241634e-07, 'epoch': 0.0}
0%| | 73/217125 [16:42<877:13:11, 14.55s/it]
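For what it's worth, the ~800 h figure follows directly from the progress bar: 217125 optimizer steps at 14.55 s each. A minimal sanity check of that arithmetic (the numbers are taken from the tqdm line above):

```python
# Sanity-check the tqdm ETA: remaining steps times seconds per step.
# 73/217125 completed steps and 14.55 s/it come from the progress bar above.
total_steps = 217_125
done_steps = 73
sec_per_step = 14.55

remaining_hours = (total_steps - done_steps) * sec_per_step / 3600
print(f"estimated remaining time: {remaining_hours:.1f} h")  # ~877 h, matching 877:13:11
```

So the wall-clock estimate is consistent with the per-step time; the total step count (which scales inversely with the effective batch size, i.e. per_device_train_batch_size x num_processes x any gradient accumulation) is what makes the run this long.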