Facico / Chinese-Vicuna

Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model (a low-resource Chinese LLaMA + LoRA recipe, with a structure modeled on Alpaca)
https://github.com/Facico/Chinese-Vicuna
Apache License 2.0
4.14k stars 425 forks

transformers and pydantic problem #238

Open ww0o0 opened 1 year ago

ww0o0 commented 1 year ago

1. Which script and command did you use?
   bash scripts/finetune.sh

2. What parameters did you use (script arguments, command arguments)?

   TOT_CUDA="0,1,3"
   CUDAs=(${TOT_CUDA//,/ })
   CUDA_NUM=${#CUDAs[@]}
   PORT="12345"

   DATA_PATH="data/newfl_data.json" #"../dataset/instruction/guanaco_non_chat_mini_52K-utf8.json" #"./sample/merge_sample.json"
   OUTPUT_PATH="lora-Vicuna"
   MODEL_PATH="decapoda-research/llama-7b-hf"
   lora_checkpoint="./lora-Vicuna/checkpoint-11600"
   TEST_SIZE=1

   CUDA_VISIBLE_DEVICES=${TOT_CUDA} torchrun --nproc_per_node=$CUDA_NUM --master_port=$PORT finetune.py \
       --data_path $DATA_PATH \
       --output_path $OUTPUT_PATH \
       --model_path $MODEL_PATH \
       --eval_steps 200 \
       --save_steps 200 \
       --test_size $TEST_SIZE

3. Did you modify our code?
   No modifications.

4. Which dataset did you use?
   A legal-domain dataset.

The environment was set up by following the README exactly.

You can also describe your problem from the runtime side: 1. What is the error message, and which code raised it? (You can send us the complete error output.) Error message (screenshot: QQ图片20230711153945):

Traceback (most recent call last):
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1146, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 31, in <module>
    from ...modeling_utils import PreTrainedModel
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/transformers/modeling_utils.py", line 37, in <module>
    from .deepspeed import deepspeed_config, is_deepspeed_zero3_enabled
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/transformers/deepspeed.py", line 38, in <module>
    from accelerate.utils.deepspeed import HfDeepSpeedConfig as DeepSpeedConfig
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/accelerate/__init__.py", line 7, in <module>
    from .accelerator import Accelerator
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/accelerate/accelerator.py", line 27, in <module>
    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/accelerate/checkpointing.py", line 24, in <module>
    from .utils import (
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/accelerate/utils/__init__.py", line 122, in <module>
    from .other import (
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/accelerate/utils/other.py", line 27, in <module>
    from deepspeed import DeepSpeedEngine
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/deepspeed/__init__.py", line 15, in <module>
    from . import module_inject
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/deepspeed/module_inject/__init__.py", line 3, in <module>
    from .replace_module import replace_transformer_layer, revert_transformer_layer, ReplaceWithTensorSlicing, GroupQuantizer, generic_injection
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/deepspeed/module_inject/replace_module.py", line 803, in <module>
    from ..pipe import PipelineModule
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/deepspeed/pipe/__init__.py", line 3, in <module>
    from ..runtime.pipe import PipelineModule, LayerSpec, TiedLayerSpec
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/deepspeed/runtime/pipe/__init__.py", line 3, in <module>
    from .module import PipelineModule, LayerSpec, TiedLayerSpec
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/deepspeed/runtime/pipe/module.py", line 16, in <module>
    from ..activation_checkpointing import checkpointing
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/deepspeed/runtime/activation_checkpointing/checkpointing.py", line 25, in <module>
    from deepspeed.runtime.config import DeepSpeedConfig
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/deepspeed/runtime/config.py", line 30, in <module>
    from ..monitor.config import get_monitor_config
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/deepspeed/monitor/config.py", line 70, in <module>
    class DeepSpeedMonitorConfig(DeepSpeedConfigModel):
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/deepspeed/monitor/config.py", line 82, in DeepSpeedMonitorConfig
    def check_enabled(cls, values):
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/pydantic/deprecated/class_validators.py", line 222, in root_validator
    return root_validator()(*__args)  # type: ignore
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/pydantic/deprecated/class_validators.py", line 228, in root_validator
    raise PydanticUserError(
pydantic.errors.PydanticUserError: If you use @root_validator with pre=False (the default) you MUST specify skip_on_failure=True. Note that @root_validator is deprecated and should be replaced with @model_validator.

For further information visit https://errors.pydantic.dev/2.0.2/u/root-validator-pre-skip

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/wsx/Projects/Chinese-Vicuna/finetune.py", line 16, in <module>
    from transformers import LlamaForCausalLM, LlamaTokenizer
  File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1137, in __getattr__
    value = getattr(module, name)
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1136, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/home/wsx/anaconda3/envs/ChVicuna/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1148, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback): If you use @root_validator with pre=False (the default) you MUST specify skip_on_failure=True. Note that @root_validator is deprecated and should be replaced with @model_validator.
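The root cause in the traceback is that pydantic 2.x tightened the deprecated `@root_validator` API that DeepSpeed's `monitor/config.py` still uses in the pydantic 1.x style. The following is a simplified stand-in (not pydantic's actual implementation) that illustrates why the bare `@root_validator` decorator raises at import time under the 2.x rules:

```python
# Simplified stand-in for pydantic 2.x's deprecated @root_validator shim.
# NOT pydantic's real code -- it only models the check described in the
# error message: with pre=False (the default), skip_on_failure=True is required.
def root_validator(*args, pre=False, skip_on_failure=False):
    def decorator(fn):
        if not pre and not skip_on_failure:
            raise RuntimeError(
                "If you use @root_validator with pre=False (the default) "
                "you MUST specify skip_on_failure=True"
            )
        return classmethod(fn)

    if args and callable(args[0]):
        # Bare usage: @root_validator -- the pydantic-1.x style DeepSpeed uses.
        return decorator(args[0])
    return decorator


# The v1-style bare decorator fails under the 2.x rules...
try:
    class Broken:
        @root_validator
        def check_enabled(cls, values):
            return values
except RuntimeError as exc:
    print("raises:", exc)

# ...while passing skip_on_failure=True (or staying on pydantic 1.x) is accepted.
class Fixed:
    @root_validator(skip_on_failure=True)
    def check_enabled(cls, values):
        return values
```

This is why the error fires during a plain `import transformers`: the import chain reaches DeepSpeed's class definitions, and the decorator check runs at class-creation time, not at validation time.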

Orangeices commented 1 year ago

pip install pydantic==1.10.7
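Pinning pydantic below 2.0 resolves the import error because DeepSpeed's v1-style `@root_validator` usage is still valid there. A sketch of applying and verifying the fix (assumes `pip` and `python` point at the same ChVicuna environment):

```shell
# Downgrade pydantic to the last 1.x line so DeepSpeed's
# @root_validator usage imports cleanly again.
pip install "pydantic==1.10.7"

# Verify the installed version before re-running scripts/finetune.sh;
# it should print 1.10.7.
python -c "import pydantic; print(pydantic.VERSION)"
```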