jshilong / GPT4RoI

GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest

If I want to continue fine-tuning from your GPT4RoI weights, how should I design the parameters for train.sh? #15


hangzeli08 commented 1 year ago

How can I set WORKDIR and STAGE1WORKDIR if I want to continue fine-tuning from your GPT4RoI weights?

jshilong commented 1 year ago

You should download the GPT4RoI weights and merge them with the LLaMA weights. Then you can do this:

mkdir -p exp_name/checkpoint-0

Then move the merged checkpoint files into exp_name/checkpoint-0.
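For example, assuming the merged GPT4RoI weights were saved to a local directory named path_to_merged_gpt4roi (a placeholder name, in the same style as path_to_vicuna-7b used below), this could be:

cp -r path_to_merged_gpt4roi/* exp_name/checkpoint-0/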

Then run:

WORKDIR=exp_name

export PYTHONPATH=`pwd`:$PYTHONPATH

torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
    gpt4roi/train/train_mem.py \
    --model_name_or_path path_to_vicuna-7b \
    --vision_tower openai/clip-vit-large-patch14 \
    --pretrain_mm_mlp_adapter LLaVA-7b-pretrain-projector-v0-CC3M-595K-original_caption.bin \
    --dataset_config ./gpt4roi/configs/stage2.py \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end True \
    --bf16 True \
    --output_dir $WORKDIR \
    --num_train_epochs 2 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 3000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.003 \
    --warmup_steps 3000 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to "none" \
    --seed 0 \
    | tee $WORKDIR/train.log

You can find this logic at https://github.com/jshilong/GPT4RoI/blob/0827109da4716d01f168bf5fa682bd0e1a874d67/gpt4roi/train/train.py#L708
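For reference, the gist of that logic is the standard HuggingFace Trainer resume pattern: if the output directory already contains checkpoint-* folders (such as the checkpoint-0 created above), training resumes from the latest checkpoint instead of starting from the initial weights (here, vicuna-7b plus the pretrained projector). A paraphrased sketch, not the exact GPT4RoI code, with trainer and output_dir standing in for the objects built inside train.py:

import pathlib

def train_or_resume(trainer, output_dir):
    # Paraphrase of the check referenced above: if checkpoint-* folders already
    # exist under output_dir (e.g. exp_name/checkpoint-0), resume from the
    # latest checkpoint; otherwise train from the freshly initialized weights.
    if list(pathlib.Path(output_dir).glob("checkpoint-*")):
        trainer.train(resume_from_checkpoint=True)
    else:
        trainer.train()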