jiaweizzhao / GaLore

GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Apache License 2.0

GaLore is not supported for DeepSpeed ZeRO-3 #23

Closed youganglyu closed 6 months ago

youganglyu commented 6 months ago

Error information

/root/anaconda3/envs/new_llm/lib/python3.10/site-packages/accelerate/accelerator.py:432: FutureWarning: Passing the following arguments to `Accelerator` is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches', 'split_batches', 'even_batches', 'use_seedable_sampler']). Please pass an `accelerate.DataLoaderConfiguration` instead: 
dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=False, even_batches=True, use_seedable_sampler=True)
  warnings.warn(
Traceback (most recent call last):
  File "/root/paddlejob/workspace/20240315/0_llm/new_llm/LLaMA-Factory-main/src/train_bash.py", line 14, in <module>
    main()
  File "/root/paddlejob/workspace/20240315/0_llm/new_llm/LLaMA-Factory-main/src/train_bash.py", line 5, in main
    run_exp()
  File "/root/paddlejob/workspace/20240315/0_llm/new_llm/LLaMA-Factory-main/src/llmtuner/train/tuner.py", line 32, in run_exp
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/root/paddlejob/workspace/20240315/0_llm/new_llm/LLaMA-Factory-main/src/llmtuner/train/sft/workflow.py", line 54, in run_sft
    trainer = CustomSeq2SeqTrainer(
  File "/root/anaconda3/envs/new_llm/lib/python3.10/site-packages/transformers/trainer_seq2seq.py", line 56, in __init__
    super().__init__(
  File "/root/anaconda3/envs/new_llm/lib/python3.10/site-packages/transformers/trainer.py", line 527, in __init__
    raise RuntimeError(
RuntimeError: Passing `optimizers` is not allowed if Deepspeed or PyTorch FSDP is enabled. You should subclass `Trainer` and override the `create_optimizer_and_scheduler` method.
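
For reference, the workaround that this RuntimeError points at would look roughly like the sketch below: subclass the trainer and build the GaLore optimizer inside `create_optimizer` (which `create_optimizer_and_scheduler` calls) instead of passing `optimizers=` to the constructor. This is only a minimal sketch; the `GaLoreAdamW` import and parameter-group keys follow the GaLore README, and whether the projected gradients actually work once ZeRO-3 shards the parameters is exactly what this issue is asking about.

from transformers import Seq2SeqTrainer
from galore_torch import GaLoreAdamW  # optimizer shipped with this repo

class GaLoreSeq2SeqTrainer(Seq2SeqTrainer):
    # Build the optimizer here instead of passing `optimizers=` to __init__,
    # as the RuntimeError above suggests.
    def create_optimizer(self):
        if self.optimizer is None:
            # GaLore projects gradients of 2-D weight matrices; other
            # parameters (biases, norms) keep the plain AdamW update.
            galore_params = [p for p in self.model.parameters()
                             if p.requires_grad and p.dim() == 2]
            regular_params = [p for p in self.model.parameters()
                              if p.requires_grad and p.dim() != 2]
            param_groups = [
                {"params": regular_params},
                # rank / update_proj_gap / scale follow the README defaults
                {"params": galore_params, "rank": 128,
                 "update_proj_gap": 200, "scale": 0.25, "proj_type": "std"},
            ]
            self.optimizer = GaLoreAdamW(param_groups,
                                         lr=self.args.learning_rate)
        return self.optimizer

Note that under ZeRO-3 the parameters are already partitioned by the time the trainer is built, so the `p.dim() == 2` check (and the low-rank projection itself) may not see full weight matrices, which is presumably why the combination is unsupported for now.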

DeepSpeed ZeRO-3 config

{
    "bf16": {
        "enabled": "auto"
    },

    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    },

    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
jiaweizzhao commented 6 months ago

We are working on the integration; please follow the progress here: #2