huggingface / trl

Train transformer language models with reinforcement learning.
http://hf.co/docs/trl
Apache License 2.0

DPO training using multi GPU #958

Closed DopeorNope-Lee closed 6 months ago

DopeorNope-Lee commented 11 months ago

Here is my model-loading code:

```python
model = AutoModelForCausalLM.from_pretrained(
    script_args.model_name_or_path,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    load_in_4bit=True,
    device_map="auto",
    trust_remote_code=True,
)

model_ref = AutoModelForCausalLM.from_pretrained(
    script_args.model_name_or_path,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    load_in_4bit=True,
    device_map="auto",
    trust_remote_code=True)
model.config.use_cache = False
```

```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
```

I tried removing the quantization, but then I run out of memory (OOM).

I'm training on 4x RTX 3090 GPUs. I need to split the model across the GPUs (for GPU memory), but when I pass the dataset to the DPOTrainer in Dataset format I get the error above. What should I do?

I'm using a modified version of the DPO training example code.

harrison4ride commented 11 months ago

device_map="auto" only allows script runned on one GPU, can you replace it with {"": Accelerator().local_process_index}, and try again?

myeonghak commented 10 months ago

Does the error occur in the `dpo_loss` method in dpo_trainer.py? I faced the same error and forced the tensors onto the same GPU by adding `.to(device)` to 4 variables in the `dpo_loss` function. It might not be the best solution to the problem, but at least it works for me.

Example:

```python
def dpo_loss(self, policy_chosen_logps, policy_rejected_logps, reference_chosen_logps, reference_rejected_logps, reference_free=False):
    # Define the device to use
    device = torch.device('cuda:0')

    # Move all tensors to the specified device
    policy_chosen_logps = policy_chosen_logps.to(device)
    policy_rejected_logps = policy_rejected_logps.to(device)
    reference_chosen_logps = reference_chosen_logps.to(device)
    reference_rejected_logps = reference_rejected_logps.to(device)
    ...
```

Please let me know if you find a better way ^^; I really enjoyed KOAT, by the way.

lvwerra commented 10 months ago

cc @kashif

DopeorNope-Lee commented 10 months ago

> Does the error occur in the `dpo_loss` method in dpo_trainer.py? I faced the same error and forced the tensors onto the same GPU by adding `.to(device)` to 4 variables in the `dpo_loss` function. It might not be the best solution to the problem, but at least it works for me. [...]
>
> Please let me know if you find a better way ^^; I really enjoyed KOAT, by the way.

I found a good approach and wanted to share it. In a multi-GPU setup, when using the SFT model as the reference model, instead of loading it a second time you can copy it with the LoRA layers removed and use that copy as the reference ^^ It then loads without problems. The issue probably occurs because the layers end up assigned to different GPUs.

Thank you ^^
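As a rough sketch of that idea (my reading of the comment above, not the commenter's actual code; it assumes the policy is a non-quantized `PeftModel` and a peft version that provides `unload()`):

```python
import copy

# `policy_model` is the PEFT-wrapped SFT model that will be trained with DPO.
# Instead of loading the checkpoint a second time as the reference model,
# copy it and strip the LoRA layers so both models share the same device placement.
ref_model = copy.deepcopy(policy_model)
ref_model = ref_model.unload()   # drop the LoRA modules without merging them
ref_model.eval()
for p in ref_model.parameters():
    p.requires_grad_(False)      # the reference model stays frozen in DPO
```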

sd3ntato commented 9 months ago

hello,

I load the model like this and start training, as described in the test: https://github.com/huggingface/trl/blob/main/tests/test_dpo_trainer.py

here is the code:

```python
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    args.model_name,
    use_cache=False if args.gradient_checkpointing else True,
    trust_remote_code=True,
    device_map="auto",
    quantization_config=bnb_config,
    use_auth_token=True,
)

output_dir = "/opt/ml/checkpoints/"
training_args = TrainingArguments(
    do_eval=True,
    bf16=args.bf16,
    output_dir=output_dir,
    max_steps=args.max_steps,
    warmup_ratio=warmup_ratio,
    eval_steps=args.eval_steps,
    evaluation_strategy="steps",
    learning_rate=learning_rate,
    logging_steps=args.eval_steps,
    num_train_epochs=args.n_epochs,
    lr_scheduler_type=args.lr_scheduler_type,
    auto_find_batch_size=auto_find_batch_size,
    per_device_train_batch_size=args.train_batch_size,
    gradient_checkpointing=args.gradient_checkpointing,
    gradient_accumulation_steps=args.gradient_accumulation_steps,
    logging_strategy="steps",
    overwrite_output_dir=True,
    logging_dir=f"{output_dir}/logs",
)

trainer = DPOTrainer(
    beta=0.1,
    model=model,
    ref_model=None,
    args=training_args,
    tokenizer=tokenizer,
    eval_dataset=dataset_test,
    train_dataset=dataset_train,
    precompute_ref_log_probs=True,
    peft_config=LoraConfig(
        r=args.lora_r,
        lora_alpha=args.lora_alpha,
        target_modules=find_all_linear_names(model),
        lora_dropout=0.1,
        bias="none",
        task_type=TaskType.CAUSAL_LM,
    ),
)

trainer.train()
```

this is my requirements file:

```
wandb==0.15.10
transformers==4.36.2
trl @ git+https://github.com/huggingface/trl.git
diffusers==0.24.0
peft==0.6.2
accelerate
bitsandbytes
evaluate
unsloth[cu121] @ git+https://github.com/unslothai/unsloth.git
```

hyperparameters:

```json
"hyperparameters": {
    "bf16": "True",
    "eval_steps": 1,
    "gradient_accumulation_steps": 1,
    "learning_rate": 0.0001,
    "lora_alpha": 16,
    "lora_r": 16,
    "lr_scheduler_type": "constant_with_warmup",
    "max_steps": -1,
    "model_name": "mistralai/Mixtral-8x7B-v0.1",
    "n_epochs": 1,
    "train_batch_size": 1,
},
```

I am running on a g5.12xlarge machine, which has 4 A10G GPUs.

finally the error:

```
Downloading shards: 100%|██████████| 19/19 [11:30<00:00, 32.23s/it]
Downloading shards: 100%|██████████| 19/19 [11:30<00:00, 36.36s/it]
Loading checkpoint shards: 100%|██████████| 19/19 [00:22<00:00,  1.13s/it]
Loading checkpoint shards: 100%|██████████| 19/19 [00:22<00:00,  1.17s/it]
generation_config.json: 100%|██████████| 116/116 [00:00<00:00, 703kB/s]
/opt/conda/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py:260: UserWarning: When using DPODataCollatorWithPadding, you should set `max_length` in the DPOTrainer's init it will be set to `512` by default, but you should do it yourself in the future.
  warnings.warn(
/opt/conda/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py:267: UserWarning: When using DPODataCollatorWithPadding, you should set `max_prompt_length` in the DPOTrainer's init it will be set to `128` by default, but you should do it yourself in the future.
  warnings.warn(
/opt/conda/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py:291: UserWarning: When using DPODataCollatorWithPadding, you should set `remove_unused_columns=False` in your TrainingArguments we have set it for you, but you should do it yourself in the future.
  warnings.warn(
Map:   0%|          | 0/3 [00:00<?, ? examples/s]
Map:   0%|          | 0/3 [00:00<?, ? examples/s]
0%|          | 0/3 [00:00<?, ?it/s]
/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
Traceback (most recent call last):
  File "/opt/ml/code/run_dpo.py", line 250, in <module>
    main()
  File "/opt/ml/code/run_dpo.py", line 235, in main
    training_function(run, args)
  File "/opt/ml/code/run_dpo.py", line 182, in training_function
    trainer.train()
  File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1537, in train
    return inner_training_loop(
  File "/opt/conda/lib/python3.10/site-packages/accelerate/utils/memory.py", line 136, in decorator
    return function(batch_size, *args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1854, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2723, in training_step
    loss = self.compute_loss(model, inputs)
  File "/opt/conda/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py", line 975, in compute_loss
    loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
  File "/opt/conda/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py", line 944, in get_batch_loss_metrics
    losses, chosen_rewards, rejected_rewards = self.dpo_loss(
  File "/opt/conda/lib/python3.10/site-packages/trl/trainer/dpo_trainer.py", line 784, in dpo_loss
    logits = pi_logratios - ref_logratios
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cuda:0!
```

kashif commented 9 months ago

@sd3ntato can you also paste the command you use to run the training?

sd3ntato commented 9 months ago

hi, sagemaker runs:

```
/opt/conda/bin/python3.10 run_dpo.py --auto_find_batch_size True --base_job_name mistralai-Mixtral-8x7B-v0-1 --bf16 True --dataset_path /opt/ml/input/data/training --eval_steps 1 --gradient_accumulation_steps 1 --hf_token ... --hub_token ... --learning_rate 0.0001 --load_in_4bit True --lora_alpha 16 --lora_r 16 --lr_scheduler_type constant_with_warmup --max_steps -1 --model_name mistralai/Mixtral-8x7B-v0.1 --n_epochs 1 --train_batch_size 1 --use_peft True --valid_dataset_path ... --wandb_dataset_test querlo_dpo_val --wandb_dataset_train querlo_dpo_train --wandb_project querlo-dpo --warmup_ratio 0.1
```
raghavgarg97 commented 9 months ago

I tried loading on multiple GPUs as well in issue #1117. I didn't get the tensor mismatch issue, but I got OOM. Using PEFT, I also got the same error; I mentioned it in that issue.

Jerrrrykun commented 8 months ago

Same issue here! My error is `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:7!` with 8x A100 80GB.

I tried `device_map={"": Accelerator().local_process_index}` from @harrison4ride when loading the base model:

```python
base_model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    # device_map='auto',
    device_map={"": Accelerator().local_process_index},
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # if flash-attn-2 is installed
)
```

and my DPOTrainer is:

```python
trainer = DPOTrainer(
    model,
    ref_model=None,
    args=training_arguments,
    peft_config=peft_config,
    beta=0.1,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    max_length=6000,
    max_prompt_length=800,
)
```

But now I have another issue: `RuntimeError: CUDA error: device-side assert triggered`. In detail, it shows:

```
.....
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [520,0,0], thread: [19,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [520,0,0], thread: [20,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [520,0,0], thread: [21,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [520,0,0], thread: [22,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [520,0,0], thread: [23,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [520,0,0], thread: [24,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
....
```

I found another issue #1117 that solves this by using DeepSpeed ZeRO-2 without a PEFT config. How can we use a PEFT config while training on multiple GPUs with DPOTrainer?

younesbelkada commented 8 months ago

Hi @Jerrrrykun, can you try out the latest dpo.py script: https://github.com/huggingface/trl/blob/main/examples/scripts/dpo.py? I can confirm that multi-GPU DPO using QLoRA with this accelerate config: https://github.com/huggingface/trl/blob/main/examples/accelerate_configs/multi_gpu.yaml works fine for us on a multi-GPU env using TRL main. Can you test that out and let us know if you run into any issues?

younesbelkada commented 8 months ago

Here is the bash file we use to run the DPO script: https://github.com/huggingface/trl/blob/main/commands/run_dpo.sh

Amin-Tajgardoon commented 8 months ago

> Hi @Jerrrrykun, can you try out the latest dpo.py script: https://github.com/huggingface/trl/blob/main/examples/scripts/dpo.py? I can confirm that multi-GPU DPO using QLoRA with this accelerate config: https://github.com/huggingface/trl/blob/main/examples/accelerate_configs/multi_gpu.yaml works fine for us on a multi-GPU env using TRL main.

@younesbelkada Your script loads a copy of the model onto each GPU and runs OOM if the model does not fit on a single one. Is there a way to distribute the model among the GPUs instead, without running into the `RuntimeError: Expected all tensors to be on the same device, but found at least two devices ...` error?

I'm trying to apply DPO RLHF to the HF "mistralai/Mixtral-8x7B-Instruct-v0.1" model on an AWS g5 multi-GPU instance.

younesbelkada commented 8 months ago

@Amin-Tajgardoon thanks! It does indeed load 2 copies of the model if you don't train a PEFT adapter. If a PEFT config is passed, model_ref is set to None: https://github.com/huggingface/trl/blob/814930377cbd8c90849099358a7db128a98e8c99/examples/scripts/dpo.py#L146. Thus, with one copy of the model you can get both the active and reference logits --> this is what I meant by QLoRA. The underlying concept is similar to the one described in https://huggingface.co/blog/trl-peft, with a 4-bit base model instead of an 8-bit one.

Amin-Tajgardoon commented 8 months ago

> @Amin-Tajgardoon thanks! It does indeed load 2 copies of the model if you don't train a PEFT adapter. If a PEFT config is passed, model_ref is set to None: https://github.com/huggingface/trl/blob/814930377cbd8c90849099358a7db128a98e8c99/examples/scripts/dpo.py#L146. Thus, with one copy of the model you can get both the active and reference logits --> this is what I meant by QLoRA.

@younesbelkada Even with QLoRA and `model_ref=None`, larger models may not fit on a single GPU with 24GB of memory. That's why I'm trying to split the model across multiple devices. But your script copies the same model onto every GPU instead of distributing layers to different devices. Any ideas?

younesbelkada commented 8 months ago

True. Then, for distributing the model across different GPUs, you need to load the model with `device_map="auto"` and run the training script with `python xxx.py` instead of `accelerate launch xxx.py`.
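A rough sketch of that setup (not an official TRL example; the small model, tiny dataset, and hyperparameters are placeholders just to keep the snippet self-contained):

```python
import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "mistralai/Mistral-7B-Instruct-v0.1"  # placeholder model

# device_map="auto" shards the layers across all visible GPUs (naive model
# parallelism). Start this file with `python train_dpo.py`, not `accelerate launch`.
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # this tokenizer has no pad token by default

# Tiny stand-in preference dataset with the prompt/chosen/rejected columns.
train_dataset = Dataset.from_dict({
    "prompt": ["What is DPO?"],
    "chosen": [" Direct Preference Optimization, a preference-tuning method."],
    "rejected": [" No idea."],
})

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a peft_config, reference logits come from the disabled adapter
    args=TrainingArguments(
        output_dir="dpo-out",
        per_device_train_batch_size=1,
        remove_unused_columns=False,
    ),
    beta=0.1,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(r=16, lora_alpha=16, task_type="CAUSAL_LM"),
)
trainer.train()
```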

datascientistwannabe1 commented 7 months ago

> @Amin-Tajgardoon thanks! It does indeed load 2 copies of the model if you don't train a PEFT adapter. If a PEFT config is passed, model_ref is set to None: https://github.com/huggingface/trl/blob/814930377cbd8c90849099358a7db128a98e8c99/examples/scripts/dpo.py#L146. Thus, with one copy of the model you can get both the active and reference logits --> this is what I meant by QLoRA.

> @younesbelkada Even with QLoRA and `model_ref=None`, larger models may not fit on a single GPU with 24GB of memory. That's why I'm trying to split the model across multiple devices. But your script copies the same model onto every GPU instead of distributing layers to different devices. Any ideas?

@Amin-Tajgardoon I am experiencing the same issue as you. Did you manage to find a fix?

huiyeruzhou commented 7 months ago

> True. Then, for distributing the model across different GPUs, you need to load the model with `device_map="auto"` and run the training script with `python xxx.py` instead of `accelerate launch xxx.py`.

I think the key point is `precompute_ref_log_probs=True` in the configuration mentioned earlier. It appears this setting doesn't integrate smoothly with multi-GPU setups. To be more specific:

- Executing `deepspeed --num_gpus 2 dpo.py` without moving the model to 'cuda' results in an error indicating that 'cpu' and 'cuda:0' are not on the same device.
- Running `deepspeed --num_gpus 2 dpo.py` and moving the model to 'cuda' results in an error stating that 'cuda:0' and 'cuda:1' are not on the same device.
- Simply running `python dpo.py` results in an error: `Default process group has not been initialized, please ensure to call init_process_group`.

Understanding the behavior of deepspeed and accelerate in these scenarios is somewhat challenging for me, but I'm inclined to think that the DPOTrainer's model fails to be distributed across multiple GPUs before the call to `get_train_dataloader`.

To work around this, I used `deepspeed --num_gpus 1 dpo.py` and stored the precomputed dataset on disk by modifying `get_train_dataloader` in DPOTrainer (roughly along the lines of the sketch below). This approach proved effective: both generation and training proceeded smoothly without requiring an additional 7B model to be loaded into VRAM.
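A minimal sketch of that kind of caching (my own reconstruction, not the commenter's code; the `_precomputed_train_ref_log_probs` attribute and the cache path are assumptions based on the TRL version discussed in this thread):

```python
import os
from datasets import load_from_disk
from trl import DPOTrainer

class CachedRefLogpsDPOTrainer(DPOTrainer):
    """DPOTrainer that persists the precomputed reference log-probs to disk."""

    CACHE_DIR = "ref_logps_cache"  # assumed path, adjust as needed

    def get_train_dataloader(self):
        if os.path.isdir(self.CACHE_DIR):
            # Reuse a dataset that already contains the reference log-prob columns,
            # so the reference model never has to be loaded again.
            self.train_dataset = load_from_disk(self.CACHE_DIR)
            self._precomputed_train_ref_log_probs = True
            return super().get_train_dataloader()

        # First run (e.g. deepspeed --num_gpus 1): the parent call precomputes
        # the reference log-probs; save the augmented dataset for later runs.
        dataloader = super().get_train_dataloader()
        self.train_dataset.save_to_disk(self.CACHE_DIR)
        return dataloader
```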

I think that detailed documentation on how DPO integrates with precompute_ref_log_probs would be incredibly beneficial. It would be more helpful if examples using 'accelerate' and 'deepspeed' were made available~

younesbelkada commented 7 months ago

Thanks! Hmm, I see; indeed there might be some issues between DPO, DeepSpeed and precompute_ref_log_probs. I think @kashif has experienced these issues in the past and can probably give better insights than I can 🙏

pengwei715 commented 6 months ago

Hi team. I think the root cause of the issue is inside the Trainer class in the transformers package. `tr_loss` is on `args.device`; however, after the DPO loss is computed, `tr_loss_step` can be on a different device. It's not harmful to call `.to()` when `tr_loss` is updated in the Trainer class. I am opening a PR in the transformers package here: https://github.com/huggingface/transformers/pull/29695. Please let me know if this makes sense to you. Thanks. cc @guy1992l
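A toy illustration of the kind of change proposed there (not the actual Trainer code or the exact patch in the PR):

```python
import torch

# The Trainer accumulates tr_loss on args.device, while the per-step loss
# returned via dpo_loss may land on another device; moving it over before
# accumulating avoids the "Expected all tensors to be on the same device" error.
tr_loss = torch.zeros(1)            # accumulator on the trainer's device
tr_loss_step = torch.tensor(0.73)   # stand-in for a loss computed elsewhere
tr_loss = tr_loss + tr_loss_step.to(tr_loss.device)
print(tr_loss)                      # tensor([0.7300])
```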

kui253 commented 2 weeks ago

> Here is the bash file we use to run the DPO script: https://github.com/huggingface/trl/blob/main/commands/run_dpo.sh

Hi, I used that config file and shell script, but I still got an error:

```
Traceback (most recent call last):
  File "/home/www/TrainBaselines/run_dpo.py", line 401, in <module>
    trainer.train()
  File "/opt/miniconda3/envs/whd/lib/python3.10/site-packages/transformers/trainer.py", line 1938, in train
    return inner_training_loop(
  File "/opt/miniconda3/envs/whd/lib/python3.10/site-packages/transformers/trainer.py", line 2280, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/opt/miniconda3/envs/whd/lib/python3.10/site-packages/transformers/trainer.py", line 3350, in training_step
    self.accelerator.backward(loss, **kwargs)
  File "/opt/miniconda3/envs/whd/lib/python3.10/site-packages/accelerate/accelerator.py", line 2159, in backward
    loss.backward(**kwargs)
  File "/opt/miniconda3/envs/whd/lib/python3.10/site-packages/torch/_tensor.py", line 521, in backward
    torch.autograd.backward(
  File "/opt/miniconda3/envs/whd/lib/python3.10/site-packages/torch/autograd/__init__.py", line 289, in backward
    _engine_run_backward(
  File "/opt/miniconda3/envs/whd/lib/python3.10/site-packages/torch/autograd/graph.py", line 768, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/opt/miniconda3/envs/whd/lib/python3.10/site-packages/torch/autograd/function.py", line 306, in apply
    return user_fn(self, *args)
  File "/opt/miniconda3/envs/whd/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 313, in backward
    torch.autograd.backward(outputs_with_grad, args_with_grad)
  File "/opt/miniconda3/envs/whd/lib/python3.10/site-packages/torch/autograd/__init__.py", line 289, in backward
    _engine_run_backward(
  File "/opt/miniconda3/envs/whd/lib/python3.10/site-packages/torch/autograd/graph.py", line 768, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 127 with name base_model.model.model.layers.31.self_attn.v_proj.lora_B.default.weight has been marked as ready twice. This means that multiple autograd engine  hooks have fired for this particular parameter during this iteration.
```

This is similar to the case shown here.

I did successfully launch my script by setting `ddp_find_unused_parameters=False`, but I don't know why that works.
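For reference, a sketch of where that setting could go, assuming a recent TRL with `DPOConfig` (the other values are placeholders; with the example script it should also be possible to pass it as a CLI flag):

```python
from trl import DPOConfig

# Disabling DDP's unused-parameter search avoids the "marked as ready twice"
# error when gradient checkpointing and LoRA are combined under DDP.
training_args = DPOConfig(
    output_dir="dpo-output",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    gradient_checkpointing=True,
    ddp_find_unused_parameters=False,
)
```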

Here is my launch script:

```bash
export NCCL_P2P_DISABLE=1
export CUDA_VISIBLE_DEVICES=4,5,6,7
accelerate launch --num_processes=4 --config_file multi_gpu.yaml run_dpo.py \
    --dataset_name $hf_dataset_home/trl-lib/ultrafeedback_binarized \
    --model_name_or_path $hf_home/meta-llama/Meta-Llama-3-8B-Instruct \
    --learning_rate 5.0e-6 \
    --num_train_epochs 1 \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 2 \
    --gradient_checkpointing \
    --logging_steps 25 \
    --eval_strategy steps \
    --eval_steps 50 \
    --output_dir $output_dir \
    --no_remove_unused_columns \
    --use_peft \
    --dataset_num_proc 16 \
    --lora_r 32 \
    --lora_alpha 16 > $log_dir/$cur_time.log 2>&1 &
```

Please let me know if you have some idea. Thanks a lot.🙏