QiaolingChen00 / NUS_plp_project


Fine Tune #3


QiaolingChen00 commented 1 year ago

Hardware: 8 × NVIDIA GeForce RTX 3080 Ti (12288 MiB each)

```
chenqiaoling@oneflow-26:~/oneflow/build$ nvidia-smi -L
GPU 0: NVIDIA GeForce RTX 3080 Ti (UUID: GPU-e66c2ec9-a2a0-dcd6-161d-371f88995ac6)
GPU 1: NVIDIA GeForce RTX 3080 Ti (UUID: GPU-a23a56c0-ae2c-65ef-429a-23585167cc9d)
GPU 2: NVIDIA GeForce RTX 3080 Ti (UUID: GPU-60714591-6790-05ac-d146-633704154f8e)
GPU 3: NVIDIA GeForce RTX 3080 Ti (UUID: GPU-d7f7d73f-86a2-46b2-87be-88f6a6c43a7d)
GPU 4: NVIDIA GeForce RTX 3080 Ti (UUID: GPU-2242ec15-450b-3089-2c58-ee2dbefe4554)
GPU 5: NVIDIA GeForce RTX 3080 Ti (UUID: GPU-623ce0a8-09ed-03fd-5016-23d5102b4671)
GPU 6: NVIDIA GeForce RTX 3080 Ti (UUID: GPU-aae4cbe7-5c4c-63a3-750e-f8f72ce2e0f3)
GPU 7: NVIDIA GeForce RTX 3080 Ti (UUID: GPU-1dd4b6a6-2d1e-8d4a-ce84-96a911346c8b)
```
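
The same inventory can be sanity-checked from Python with `torch.cuda`; a minimal sketch (the names and memory sizes printed are whatever the local driver reports, nothing here is hard-coded):

```python
import torch

# Enumerate the visible CUDA devices, mirroring `nvidia-smi -L`.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    # total_memory is in bytes; convert to MiB to match nvidia-smi's units.
    print(f"GPU {i}: {props.name} ({props.total_memory // 2**20} MiB)")
```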

Command:

```bash
python3 finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path '/data/home/chenqiaoling/alpaca-lora/my.json' \
    --output_dir './lora-alpaca' \
    --batch_size 32 \
    --micro_batch_size 4 \
    --num_epochs 30 \
    --learning_rate 1e-4 \
    --cutoff_len 512 \
    --val_set_size 500 \
    --lora_r 8 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[q_proj,v_proj]' \
    --train_on_inputs \
    --group_by_length
```
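
For context on these flags: alpaca-lora's `finetune.py` derives its gradient accumulation from `batch_size / micro_batch_size` (here 32 / 4 = 8 accumulation steps) and forwards the `--lora_*` values into a PEFT `LoraConfig`. A minimal sketch of the equivalent configuration, assuming the stock `peft` API (the `bias` and `task_type` values are the usual alpaca-lora defaults, noted here as an assumption):

```python
from peft import LoraConfig

# Each optimizer step accumulates batch_size / micro_batch_size micro-batches.
batch_size, micro_batch_size = 32, 4
gradient_accumulation_steps = batch_size // micro_batch_size  # = 8

# LoRA adapter configuration matching the --lora_* flags above.
lora_config = LoraConfig(
    r=8,                                  # --lora_r
    lora_alpha=16,                        # --lora_alpha
    target_modules=["q_proj", "v_proj"],  # --lora_target_modules
    lora_dropout=0.05,                    # --lora_dropout
    bias="none",                          # alpaca-lora default (assumed)
    task_type="CAUSAL_LM",                # alpaca-lora default (assumed)
)
```

One launch-mode note, assuming this is the stock alpaca-lora script: invoking it with plain `python3` (rather than `torchrun`) loads the model with `device_map="auto"`, so the 7B model is sharded across the eight 12 GiB cards instead of running data-parallel.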