huggingface / transformers


transformers Trainer has no attribute 'deepspeed_plugin' #27023

Closed · shibing624 closed this 11 months ago

shibing624 commented 11 months ago

System Info

Traceback (most recent call last):
  File "/apdcephfs_teg_2/share_1367250/flemingxu/MedicalGPT/supervised_finetuning.py", line 1307, in <module>
    main()
  File "/apdcephfs_teg_2/share_1367250/flemingxu/MedicalGPT/supervised_finetuning.py", line 1248, in main
    trainer = SavePeftModelTrainer(
  File "/apdcephfs_teg_2/share_1367250/flemingxu/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 335, in __init__
    self.create_accelerator_and_postprocess()
  File "/apdcephfs_teg_2/share_1367250/flemingxu/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3853, in create_accelerator_and_postprocess
    deepspeed_plugin=self.args.deepspeed_plugin,
AttributeError: 'PeftArguments' object has no attribute 'deepspeed_plugin'

Who can help?

No response

Information

Tasks

Reproduction

transformers==4.35.0.dev0; run https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py with a Llama-2 model

Expected behavior

The script should train successfully instead of raising the AttributeError above.

muellerzr commented 11 months ago

@shibing624 we need more info. What is your accelerate env? How exactly are you running the script? Please provide these details so we can reproduce the issue.

shibing624 commented 11 months ago

accelerate env:

- `Accelerate` version: 0.23.0
- Platform: Linux-5.4.119-1-tlinux4-0009.3-x86_64-with-glibc2.17
- Python version: 3.10.11
- Numpy version: 1.24.4
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- System RAM: 1006.96 GB
- GPU type: A100-SXM4-40GB
- `Accelerate` default config:
        - compute_environment: LOCAL_MACHINE
        - distributed_type: MULTI_GPU
        - mixed_precision: fp16
        - use_cpu: False
        - debug: False
        - num_processes: 8
        - machine_rank: 0
        - num_machines: 1
        - gpu_ids: all
        - rdzv_backend: static
        - same_network: True
        - main_training_function: main
        - downcast_bf16: no
        - tpu_use_cluster: False
        - tpu_use_sudo: False
        - tpu_env: []

ds_report:

DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0
 [WARNING]  using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/apdcephfs_teg_2/share_1367250/flemingxu/miniconda3/envs/py3.10/lib/python3.10/site-packages/torch']
torch version .................... 2.0.1+cu117
deepspeed install path ........... ['/apdcephfs_teg_2/share_1367250/flemingxu/miniconda3/envs/py3.10/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.11.1, unknown, unknown
torch cuda version ............... 11.7
torch hip version ................ None
nvcc version ..................... 11.7
deepspeed wheel compiled w. ...... torch 2.0, cuda 11.7
shared memory (/dev/shm) size .... 503.48 GB

My training arguments are:

from dataclasses import dataclass, field
from typing import Optional

from transformers import TrainingArguments


@dataclass
class PeftArguments(TrainingArguments):
    use_peft: bool = field(default=True, metadata={"help": "Whether to use peft"})
    target_modules: Optional[str] = field(default="all")
    lora_rank: Optional[int] = field(default=8)
    lora_dropout: Optional[float] = field(default=0.05)
    lora_alpha: Optional[float] = field(default=32.0)
    modules_to_save: Optional[str] = field(default=None)
    peft_path: Optional[str] = field(default=None, metadata={"help": "The path to the peft model"})
    qlora: bool = field(default=False, metadata={"help": "Whether to use qlora"})
    load_in_kbits: Optional[int] = field(default=None, metadata={"help": "Kbits to train the model, value is 4, 8"})
    model_max_length: int = field(
        default=512,
        metadata={"help": "Maximum sequence length. Suggested values: 8192 * 4, 8192 * 2, 8192, 4096, 2048, 1024, 512."}
    )
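
For context, a dataclass like this is consumed through the usual `HfArgumentParser` flow before it reaches the `Trainer` (a minimal sketch of the standard pattern, assuming MedicalGPT follows it; not code copied from the repo):

from transformers import HfArgumentParser

# Sketch of the standard argument flow (assumption: the training script uses
# the common HfArgumentParser pattern). The parsed PeftArguments instance is
# the object whose missing `deepspeed_plugin` attribute Trainer trips over.
parser = HfArgumentParser(PeftArguments)
(training_args,) = parser.parse_args_into_dataclasses()
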
shibing624 commented 11 months ago

I tried to fix it: adding `deepspeed_plugin` and `debug` fields to `PeftArguments` makes the error go away, since declaring them as dataclass fields guarantees the attributes exist on the instance.

    deepspeed_plugin: Optional[str] = field(default=None)
    debug: Optional[str] = field(
        default="",
        metadata={
            "help": (
                "Whether or not to enable debug mode. default is '', "
                "`underflow_overflow` (Detect underflow and overflow in activations and weights), "
            )
        },
    )

My repo's fix for the bug: https://github.com/shibing624/MedicalGPT/commit/b08be905d7f2b98a3e62c57bb6e8b345c0611805

Reason: the bug is here: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1174C30-L1174C30

I do not use DeepSpeed, but this code path still runs. I do not know how to fix this in the original transformers trainer.py; one possible approach is sketched below.
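
One defensive fix at that call site (a sketch of the general idea only, not an actual upstream patch) would be to read the attribute with a fallback, so that TrainingArguments subclasses which never define the field don't crash:

# Standalone demonstration of the defensive-read idea (hypothetical sketch,
# not the real transformers code). getattr with a default avoids the
# AttributeError even when an arguments object never defines the field.
class FakeArgs:  # stands in for a PeftArguments instance missing the field
    pass

args = FakeArgs()
deepspeed_plugin = getattr(args, "deepspeed_plugin", None)
assert deepspeed_plugin is None  # no crash; the non-DeepSpeed path can proceed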

muellerzr commented 11 months ago

The true solution is that we should set this to None by default in TrainingArguments; there's a method for making it a hidden attribute. I'll look into this unless you want to take it, @shibing624.
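
A minimal sketch of that idea (an assumption about its shape, not the merged patch): assign the attribute in `__post_init__` instead of declaring it as a dataclass field, so `HfArgumentParser` never exposes it as a CLI flag while `Trainer` can still always read it:

from dataclasses import dataclass

@dataclass
class SketchTrainingArguments:  # hypothetical stand-in for TrainingArguments
    output_dir: str = "out"

    def __post_init__(self):
        # "Hidden" attribute: set here rather than declared as a field, so it
        # never becomes a command-line argument but always exists by default.
        self.deepspeed_plugin = None

args = SketchTrainingArguments()
assert args.deepspeed_plugin is None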