Closed: zhongshsh closed this issue 7 months ago
Hey, thanks for reporting! Can you upgrade to the newest version of transformers? 🤗
Thanks for your reply. After running `pip install -U transformers` (or `conda upgrade transformers`), the same error still exists. Here is the version info after the upgrade (a quick in-process check is sketched after this list):
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-18-shopee-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.0
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
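For completeness, a minimal sketch of the in-process check mentioned above; it assumes nothing beyond `transformers` being importable in the environment where the error occurs:

```python
# Sanity-check that the runtime actually picked up the upgraded package;
# stale environments or multiple Python installs are a common cause of
# "the upgrade didn't help" reports.
import transformers

print(transformers.__version__)  # expected: 4.36.2 after the upgrade
print(transformers.__file__)     # shows which installation is being imported
```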
The config class used seems to be HfDeepSpeedConfig rather than HfTrainerDeepSpeedConfig (the latter inherits from the former).
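For reference, a minimal sketch confirming that inheritance direction; it assumes the transformers 4.36 module layout, where both classes live in `transformers.integrations.deepspeed` (older releases exposed them from `transformers.deepspeed`):

```python
# HfTrainerDeepSpeedConfig subclasses HfDeepSpeedConfig, so an isinstance
# or issubclass check against the base class accepts the Trainer variant,
# while a check against HfTrainerDeepSpeedConfig rejects the plain config.
from transformers.integrations.deepspeed import (
    HfDeepSpeedConfig,
    HfTrainerDeepSpeedConfig,
)

assert issubclass(HfTrainerDeepSpeedConfig, HfDeepSpeedConfig)
print([cls.__name__ for cls in HfTrainerDeepSpeedConfig.__mro__])
```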
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
System Info
- `transformers` version: 4.30.0

Who can help?
No response
Information

Tasks
- An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
1. Run the repo https://github.com/stanleylsx/llms_tool with the mode set to `rm_train` in `config.py`.
2. Launch with `deepspeed --num_gpus 2 --master_port=9999 main.py`.
3. The error is then raised.
Expected behavior
No error should be reported.