THUDM / VisualGLM-6B

Chinese and English multimodal conversational language model | 多模态中英双语对话语言模型
Apache License 2.0

Fine-tuning by following the README fails with an error #157

Closed mMrBun closed 1 year ago

mMrBun commented 1 year ago

Environment: CentOS, RTX 4090. I pulled the code, installed the dependencies, unpacked the sample images, and ran the fine-tuning shell script, which downloaded the model. Once the model finished downloading, a Tokenizer error was raised. The error message is as follows:

Traceback (most recent call last):
  File "/data/project/VisualGLM-6B/finetune_visualglm.py", line 196, in <module>
    training_main(args, model_cls=model, forward_step_function=forward_step, create_dataset_function=create_dataset_function, collate_fn=data_collator)
  File "/opt/anaconda3/envs/visualglm/lib/python3.9/site-packages/sat/training/deepspeed_training.py", line 67, in training_main
    train_data, val_data, test_data = make_loaders(args, hooks['create_dataset_function'], collate_fn=collate_fn)
  File "/opt/anaconda3/envs/visualglm/lib/python3.9/site-packages/sat/data_utils/configure_data.py", line 197, in make_loaders
    train = make_dataset(**data_set_args, args=args, dataset_weights=args.train_data_weights, is_train_data=True)
  File "/opt/anaconda3/envs/visualglm/lib/python3.9/site-packages/sat/data_utils/configure_data.py", line 124, in make_dataset_full
    d = create_dataset_function(p, args)
  File "/data/project/VisualGLM-6B/finetune_visualglm.py", line 160, in create_dataset_function
    tokenizer = get_tokenizer(args)
  File "/opt/anaconda3/envs/visualglm/lib/python3.9/site-packages/sat/tokenization/__init__.py", line 76, in get_tokenizer
    get_tokenizer.tokenizer = AutoTokenizer.from_pretrained(tokenizer_type, trust_remote_code=True)
  File "/opt/anaconda3/envs/visualglm/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 719, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers_modules.THUDM.chatglm-6b.a70fe6b0a3cf1675b3aec07e3b7bb7a8ce73c6ae.configuration_chatglm.ChatGLMConfig'> to build an AutoTokenizer. Model type should be one of AlbertConfig, AlignConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, Blip2Config, BloomConfig, BridgeTowerConfig, CamembertConfig, CanineConfig, ChineseCLIPConfig, ClapConfig, CLIPConfig, CLIPSegConfig, CodeGenConfig, ConvBertConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, DPRConfig, ElectraConfig, ErnieConfig, ErnieMConfig, EsmConfig, FlaubertConfig, FNetConfig, FSMTConfig, FunnelConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GPTSanJapaneseConfig, GroupViTConfig, HubertConfig, IBertConfig, JukeboxConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LiltConfig, LlamaConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MgpstrConfig, MobileBertConfig, MPNetConfig, MT5Config, MvpConfig, NezhaConfig, NllbMoeConfig, NystromformerConfig, OneFormerConfig, OpenAIGPTConfig, OPTConfig, OwlViTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, Pix2StructConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, RagConfig, RealmConfig, ReformerConfig, RemBertConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2TextConfig, Speech2Text2Config, SpeechT5Config, SplinterConfig, SqueezeBertConfig, SwitchTransformersConfig, T5Config, TapasConfig, TransfoXLConfig, ViltConfig, VisualBertConfig, Wav2Vec2Config, Wav2Vec2ConformerConfig, WhisperConfig, XCLIPConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, YosoConfig.
[2023-06-30 06:49:15,862] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 3119042
[2023-06-30 06:49:15,863] [ERROR] [launch.py:321:sigkill_handler] ['/opt/anaconda3/envs/visualglm/bin/python', '-u', 'finetune_visualglm.py', '--local_rank=0', '--experiment-name', 'finetune-visualglm-6b', '--model-parallel-size', '1', '--mode', 'finetune', '--train-iters', '300', '--resume-dataloader', '--max_source_length', '64', '--max_target_length', '256', '--lora_rank', '10', '--layer_range', '0', '14', '--pre_seq_len', '4', '--train-data', './fewshot-data/dataset.json', '--valid-data', './fewshot-data/dataset.json', '--distributed-backend', 'nccl', '--lr-decay-style', 'cosine', '--warmup', '.02', '--checkpoint-activations', '--save-interval', '300', '--eval-interval', '10000', '--save', './checkpoints', '--split', '1', '--eval-iters', '10', '--eval-batch-size', '8', '--zero-stage', '1', '--lr', '0.0001', '--batch-size', '4', '--skip-init', '--fp16', '--use_lora'] exits with return code = 1

The transformers version is 4.30.2. I tried downgrading to 4.27.1 and 4.28.1, but both fail with the same error message.

buzhihuoyefeng commented 1 year ago

I ran into this error too. Could it be a problem with the JSON config file?

mMrBun commented 1 year ago

> I ran into this error too. Could it be a problem with the JSON config file?

No idea. This is my first time working with the image side, so I don't really understand it. I saw an issue similar to mine where the person fixed it by putting the HF tokenizer somewhere, but I'm not sure where, and I don't quite get it.

zyx423 commented 1 year ago

I ran into this problem too. Have you managed to solve it?

mMrBun commented 1 year ago

> I ran into this problem too. Have you managed to solve it?

No, not solved. I don't know how to fix it, so I've switched to a different model for now.

1049451037 commented 1 year ago

This looks like a problem with the Hugging Face tokenizer. Try deleting ~/.cache/huggingface and then running these lines:

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('THUDM/chatglm-6b', trust_remote_code=True)

Check whether that runs normally. If it doesn't, check whether your machine can actually reach Hugging Face.
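A minimal sketch of the steps suggested above, assuming the cache lives at the default ~/.cache/huggingface location (adjust the path if HF_HOME or TRANSFORMERS_CACHE points elsewhere):

import shutil
from pathlib import Path
from transformers import AutoTokenizer

# Delete the (possibly stale or partially downloaded) Hugging Face cache
# so the tokenizer files are fetched again from scratch.
cache_dir = Path.home() / ".cache" / "huggingface"
shutil.rmtree(cache_dir, ignore_errors=True)

# Re-download the ChatGLM-6B tokenizer; trust_remote_code is required because
# the tokenizer class is defined inside the model repository itself.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
print(type(tokenizer).__name__)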

zyx423 commented 1 year ago

> This looks like a problem with the Hugging Face tokenizer. Try deleting ~/.cache/huggingface and then running these lines:
>
> from transformers import AutoTokenizer
> tokenizer = AutoTokenizer.from_pretrained('THUDM/chatglm-6b', trust_remote_code=True)
>
> Check whether that runs normally. If it doesn't, check whether your machine can actually reach Hugging Face.

Does the machine have to be able to connect to Hugging Face during fine-tuning? Our servers cannot access the external network.

1049451037 commented 1 year ago

It's not required. If you can't connect to the internet, download the tokenizer-related files from Hugging Face to a local directory:

https://huggingface.co/THUDM/chatglm-6b

Then replace the tokenizer's THUDM/chatglm-6b with the local path and it will work.
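A rough offline example of that substitution; /data/models/chatglm-6b below is a made-up path standing in for wherever you copied the files from the link above:

from transformers import AutoTokenizer

# Load the tokenizer from a local copy of the THUDM/chatglm-6b repository files
# instead of the hub id, so no network access to Hugging Face is needed.
tokenizer = AutoTokenizer.from_pretrained("/data/models/chatglm-6b", trust_remote_code=True)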

1049451037 commented 1 year ago

Also, you can try installing the GitHub version of huggingface/transformers:

pip install git+https://github.com/huggingface/transformers

mMrBun commented 1 year ago

Thanks, after deleting it and rerunning, it works now.