modelscope / ms-swift

Use PEFT or Full-parameter to finetune 400+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
https://swift.readthedocs.io/zh-cn/latest/Instruction/index.html
Apache License 2.0

Error when fine-tuning the quantized glm-4v-9b model #1425

Closed MisakaMikoto-o closed 3 months ago

MisakaMikoto-o commented 3 months ago

CUDA_VISIBLE_DEVICES=0 swift sft --model_type glm4v-9b-chat --model_id_or_path /content/glm-4v-9b-4-bits --dataset /content/drive/MyDrive/glm/training_data.jsonl --output_dir /content/drive/MyDrive/glm/output

Running the command above to fine-tune the quantized glm-4v-9b model fails with the following error:

run sh: `python /content/drive/MyDrive/glm/swift/swift/cli/sft.py --model_type glm4v-9b-chat --model_id_or_path /content/glm-4v-9b-4-bits --dataset /content/drive/MyDrive/glm/training_data.jsonl --output_dir /content/drive/MyDrive/glm/output`
2024-07-17 07:54:49.258696: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-07-17 07:54:49.311796: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-07-17 07:54:49.311855: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-07-17 07:54:49.313459: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-07-17 07:54:49.321615: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-17 07:54:50.478363: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[INFO:swift] Successfully registered `/content/drive/MyDrive/glm/swift/swift/llm/data/dataset_info.json`
[INFO:swift] Start time of running main: 2024-07-17 07:54:52.043502
[INFO:swift] Setting template_type: glm4v
[INFO:swift] Setting args.lazy_tokenize: True
[INFO:swift] Setting args.dataloader_num_workers: 1
[INFO:swift] output_dir: /content/drive/MyDrive/glm/output/glm4v-9b-chat/v15-20240717-075452
[INFO:swift] args: SftArguments(model_type='glm4v-9b-chat', model_id_or_path='/content/glm-4v-9b-4-bits', model_revision='master', sft_type='lora', freeze_parameters=0.0, additional_trainable_parameters=[], tuner_backend='peft', template_type='glm4v', output_dir='/content/drive/MyDrive/glm/output/glm4v-9b-chat/v15-20240717-075452', add_output_dir_suffix=True, ddp_backend=None, ddp_find_unused_parameters=None, ddp_broadcast_buffers=None, seed=42, resume_from_checkpoint=None, resume_only_model=False, ignore_data_skip=False, dtype='bf16', packing=False, dataset=['/content/drive/MyDrive/glm/training_data.jsonl'], val_dataset=[], dataset_seed=42, dataset_test_ratio=0.01, use_loss_scale=False, loss_scale_config_path='/content/drive/MyDrive/glm/swift/swift/llm/agent/default_loss_scale_config.json', system=None, tools_prompt='react_en', max_length=2048, truncation_strategy='delete', check_dataset_strategy='none', model_name=[None, None], model_author=[None, None], quant_method=None, quantization_bit=0, hqq_axis=0, hqq_dynamic_config_path=None, bnb_4bit_comp_dtype='bf16', bnb_4bit_quant_type='nf4', bnb_4bit_use_double_quant=True, bnb_4bit_quant_storage=None, lora_target_modules=['self_attention.query_key_value'], lora_rank=8, lora_alpha=32, lora_dropout_p=0.05, lora_bias_trainable='none', lora_modules_to_save=[], lora_dtype='AUTO', lora_lr_ratio=None, use_rslora=False, use_dora=False, init_lora_weights='true', rope_scaling=None, boft_block_size=4, boft_block_num=0, boft_n_butterfly_factor=1, boft_target_modules=['DEFAULT'], boft_dropout=0.0, boft_modules_to_save=[], vera_rank=256, vera_target_modules=['DEFAULT'], vera_projection_prng_key=0, vera_dropout=0.0, vera_d_initial=0.1, vera_modules_to_save=[], adapter_act='gelu', adapter_length=128, use_galore=False, galore_rank=128, galore_target_modules=None, galore_update_proj_gap=50, galore_scale=1.0, galore_proj_type='std', galore_optim_per_parameter=False, galore_with_embedding=False, adalora_target_r=8, adalora_init_r=12, adalora_tinit=0, adalora_tfinal=0, adalora_deltaT=1, adalora_beta1=0.85, adalora_beta2=0.85, adalora_orth_reg_weight=0.5, ia3_target_modules=['DEFAULT'], ia3_feedforward_modules=[], ia3_modules_to_save=[], llamapro_num_new_blocks=4, llamapro_num_groups=None, neftune_noise_alpha=None, neftune_backend='transformers', lisa_activated_layers=0, lisa_step_interval=20, gradient_checkpointing=True, deepspeed=None, batch_size=1, eval_batch_size=1, num_train_epochs=1, max_steps=-1, optim='adamw_torch', adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, learning_rate=0.0001, weight_decay=0.1, gradient_accumulation_steps=16, max_grad_norm=0.5, predict_with_generate=False, lr_scheduler_type='cosine', lr_scheduler_kwargs={}, warmup_ratio=0.05, warmup_steps=0, eval_steps=50, save_steps=50, save_only_model=False, save_total_limit=2, logging_steps=5, acc_steps=1, dataloader_num_workers=1, dataloader_pin_memory=True, dataloader_drop_last=False, push_to_hub=False, hub_model_id=None, hub_token=None, hub_private_repo=False, push_hub_strategy='push_best', test_oom_error=False, disable_tqdm=False, lazy_tokenize=True, preprocess_num_proc=1, use_flash_attn=None, ignore_args_error=False, check_model_is_latest=True, logging_dir='/content/drive/MyDrive/glm/output/glm4v-9b-chat/v15-20240717-075452/runs', report_to=['tensorboard'], acc_strategy='token', save_on_each_node=True, evaluation_strategy='steps', save_strategy='steps', save_safetensors=True, gpu_memory_fraction=None, include_num_input_tokens_seen=False, local_repo_path=None, 
custom_register_path=None, custom_dataset_info=None, device_map_config_path=None, max_new_tokens=2048, do_sample=True, temperature=0.3, top_k=20, top_p=0.7, repetition_penalty=1.0, num_beams=1, fsdp='', fsdp_config=None, sequence_parallel_size=1, model_layer_cls_name=None, metric_warmup_step=0, fsdp_num=1, per_device_train_batch_size=None, per_device_eval_batch_size=None, eval_strategy=None, self_cognition_sample=0, train_dataset_mix_ratio=0.0, train_dataset_mix_ds=['ms-bench'], train_dataset_sample=-1, val_dataset_sample=None, safe_serialization=None, only_save_model=None, neftune_alpha=None, deepspeed_config_path=None, model_cache_dir=None, custom_train_dataset_path=[], custom_val_dataset_path=[])
[INFO:swift] Global seed set to 42
device_count: 1
rank: -1, local_rank: -1, world_size: 1, local_world_size: 1
[INFO:swift] Loading the model using model_dir: /content/glm-4v-9b-4-bits
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO:swift] model_kwargs: {'low_cpu_mem_usage': True, 'device_map': 'cuda:0'}
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
Loading checkpoint shards: 100% 2/2 [00:04<00:00,  2.23s/it]
[INFO:swift] model.max_model_len: 8192
[INFO:swift] model_config: ChatGLMConfig {
  "_name_or_path": "/content/glm-4v-9b-4-bits",
  "add_bias_linear": false,
  "add_qkv_bias": true,
  "apply_query_key_layer_scaling": true,
  "apply_residual_connection_post_layernorm": false,
  "architectures": [
    "ChatGLMForConditionalGeneration"
  ],
  "attention_dropout": 0.0,
  "attention_softmax_in_fp32": true,
  "auto_map": {
    "AutoConfig": "configuration_chatglm.ChatGLMConfig",
    "AutoModel": "THUDM/glm-4v-9b--modeling_chatglm.ChatGLMForConditionalGeneration",
    "AutoModelForCausalLM": "modeling_chatglm.ChatGLMForConditionalGeneration",
    "AutoModelForSeq2SeqLM": "THUDM/glm-4v-9b--modeling_chatglm.ChatGLMForConditionalGeneration",
    "AutoModelForSequenceClassification": "THUDM/glm-4v-9b--modeling_chatglm.ChatGLMForSequenceClassification"
  },
  "bias_dropout_fusion": true,
  "boi_token_id": 151339,
  "classifier_dropout": null,
  "eoi_token_id": 151340,
  "eos_token_id": [
    151329,
    151336,
    151338
  ],
  "ffn_hidden_size": 13696,
  "fp32_residual_connection": false,
  "hidden_dropout": 0.0,
  "hidden_size": 4096,
  "kv_channels": 128,
  "layernorm_epsilon": 1.5625e-07,
  "model_type": "chatglm",
  "multi_query_attention": true,
  "multi_query_group_num": 2,
  "num_attention_heads": 32,
  "num_layers": 40,
  "original_rope": true,
  "pad_token_id": 151329,
  "padded_vocab_size": 151552,
  "post_layer_norm": true,
  "pre_seq_len": null,
  "prefix_projection": false,
  "quantization_config": {
    "_load_in_4bit": true,
    "_load_in_8bit": false,
    "bnb_4bit_compute_dtype": "float16",
    "bnb_4bit_quant_storage": "uint8",
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": false,
    "llm_int8_enable_fp32_cpu_offload": false,
    "llm_int8_has_fp16_weight": false,
    "llm_int8_skip_modules": null,
    "llm_int8_threshold": 6.0,
    "load_in_4bit": true,
    "load_in_8bit": false,
    "quant_method": "bitsandbytes"
  },
  "rmsnorm": true,
  "rope_ratio": 1,
  "seq_length": 8192,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.41.2",
  "use_cache": true,
  "vision_config": {
    "dropout_prob": 0.0,
    "hidden_act": "gelu",
    "hidden_size": 1792,
    "image_size": 1120,
    "in_channels": 3,
    "intermediate_size": 15360,
    "layer_norm_eps": 1e-06,
    "num_heads": 16,
    "num_hidden_layers": 63,
    "num_positions": 6401,
    "patch_size": 14,
    "scaling_factor": 8
  },
  "vocab_size": 151552
}

[INFO:swift] generation_config: GenerationConfig {
  "do_sample": true,
  "eos_token_id": 151329,
  "max_new_tokens": 2048,
  "pad_token_id": 151329,
  "temperature": 0.3,
  "top_k": 20,
  "top_p": 0.7
}

[INFO:swift] lora_target_modules: ['self_attention.query_key_value']
[INFO:swift] lora_modules_to_save: []
[INFO:swift] lora_config: get_wrapped_class.<locals>.PeftWrapper(peft_type=<PeftType.LORA: 'LORA'>, auto_mapping=None, base_model_name_or_path='/content/glm-4v-9b-4-bits', revision=None, task_type='CAUSAL_LM', inference_mode=False, r=8, target_modules={'self_attention.query_key_value'}, lora_alpha=32, lora_dropout=0.05, fan_in_fan_out=False, bias='none', use_rslora=False, modules_to_save=[], init_lora_weights=True, layers_to_transform=None, layers_pattern=None, rank_pattern={}, alpha_pattern={}, megatron_config=None, megatron_core='megatron.core', loftq_config={}, use_dora=False, layer_replication=None, lora_dtype=None, lorap_lr_ratio=None, lorap_emb_lr=1e-06)
[INFO:swift] [base_model.model.transformer.embedding.word_embeddings.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.0.input_layernorm.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.0.self_attention.query_key_value.base_layer.weight]: requires_grad=False, dtype=torch.uint8, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.0.self_attention.query_key_value.base_layer.bias]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.0.self_attention.query_key_value.lora_A.default.weight]: requires_grad=True, dtype=torch.float32, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.0.self_attention.query_key_value.lora_B.default.weight]: requires_grad=True, dtype=torch.float32, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.0.self_attention.dense.weight]: requires_grad=False, dtype=torch.uint8, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.0.post_attention_layernorm.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.0.mlp.dense_h_to_4h.weight]: requires_grad=False, dtype=torch.uint8, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.0.mlp.dense_4h_to_h.weight]: requires_grad=False, dtype=torch.uint8, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.1.input_layernorm.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.1.self_attention.query_key_value.base_layer.weight]: requires_grad=False, dtype=torch.uint8, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.1.self_attention.query_key_value.base_layer.bias]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.1.self_attention.query_key_value.lora_A.default.weight]: requires_grad=True, dtype=torch.float32, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.1.self_attention.query_key_value.lora_B.default.weight]: requires_grad=True, dtype=torch.float32, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.1.self_attention.dense.weight]: requires_grad=False, dtype=torch.uint8, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.1.post_attention_layernorm.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.1.mlp.dense_h_to_4h.weight]: requires_grad=False, dtype=torch.uint8, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.1.mlp.dense_4h_to_h.weight]: requires_grad=False, dtype=torch.uint8, device=cuda:0
[INFO:swift] [base_model.model.transformer.encoder.layers.2.input_layernorm.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0
[INFO:swift] ...
[INFO:swift] PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): ChatGLMForConditionalGeneration(
      (transformer): ChatGLMModel(
        (embedding): Embedding(
          (word_embeddings): Embedding(151552, 4096)
        )
        (rotary_pos_emb): RotaryEmbedding()
        (encoder): GLMTransformer(
          (layers): ModuleList(
            (0-39): 40 x GLMBlock(
              (input_layernorm): RMSNorm()
              (self_attention): SelfAttention(
                (query_key_value): lora.Linear4bit(
                  (base_layer): Linear4bit(in_features=4096, out_features=4608, bias=True)
                  (lora_dropout): ModuleDict(
                    (default): Dropout(p=0.05, inplace=False)
                  )
                  (lora_A): ModuleDict(
                    (default): Linear(in_features=4096, out_features=8, bias=False)
                  )
                  (lora_B): ModuleDict(
                    (default): Linear(in_features=8, out_features=4608, bias=False)
                  )
                  (lora_embedding_A): ParameterDict()
                  (lora_embedding_B): ParameterDict()
                )
                (core_attention): CoreAttention(
                  (attention_dropout): Dropout(p=0.0, inplace=False)
                )
                (dense): Linear4bit(in_features=4096, out_features=4096, bias=False)
              )
              (post_attention_layernorm): RMSNorm()
              (mlp): MLP(
                (dense_h_to_4h): Linear4bit(in_features=4096, out_features=27392, bias=False)
                (dense_4h_to_h): Linear4bit(in_features=13696, out_features=4096, bias=False)
              )
            )
          )
          (final_layernorm): RMSNorm()
        )
        (output_layer): Linear4bit(in_features=4096, out_features=151552, bias=False)
        (vision): EVA2CLIPModel(
          (patch_embedding): PatchEmbedding(
            (proj): Conv2d(3, 1792, kernel_size=(14, 14), stride=(14, 14))
            (position_embedding): Embedding(6401, 1792)
          )
          (transformer): Transformer(
            (layers): ModuleList(
              (0-62): 63 x TransformerLayer(
                (input_layernorm): LayerNorm((1792,), eps=1e-06, elementwise_affine=True)
                (attention): Attention(
                  (query_key_value): Linear4bit(in_features=1792, out_features=5376, bias=True)
                  (dense): Linear4bit(in_features=1792, out_features=1792, bias=True)
                  (output_dropout): Dropout(p=0.0, inplace=False)
                )
                (mlp): MLP(
                  (activation_fn): GELUActivation()
                  (fc1): Linear4bit(in_features=1792, out_features=15360, bias=True)
                  (fc2): Linear4bit(in_features=15360, out_features=1792, bias=True)
                )
                (post_attention_layernorm): LayerNorm((1792,), eps=1e-06, elementwise_affine=True)
              )
            )
          )
          (linear_proj): GLU(
            (linear_proj): Linear4bit(in_features=4096, out_features=4096, bias=False)
            (norm1): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
            (act1): GELU(approximate='none')
            (dense_h_to_4h): Linear4bit(in_features=4096, out_features=13696, bias=False)
            (gate_proj): Linear4bit(in_features=4096, out_features=13696, bias=False)
            (dense_4h_to_h): Linear4bit(in_features=13696, out_features=4096, bias=False)
          )
          (conv): Conv2d(1792, 4096, kernel_size=(2, 2), stride=(2, 2))
        )
      )
    )
  )
)
[INFO:swift] PeftModelForCausalLM: 7288.5284M Params (2.7853M Trainable [0.0382%]), 0.0000M Buffers.
[INFO:swift] Setting model.config.use_cache: False
[INFO:swift] train_dataset: Dataset({
    features: ['query', 'response', 'images'],
    num_rows: 40
})
[INFO:swift] val_dataset: Dataset({
    features: ['query', 'response', 'images'],
    num_rows: 1
})
[INFO:swift] system: None
[INFO:swift] args.lazy_tokenize: True
[INFO:swift] [INPUT_IDS] [151331, 151333, 151336, 198, 151339, 151329, 151340, 111000, 100959, 99098, 104494, 101010, 106709, 151337, 108792, 98421, 5373, 111084, 98370, 98387, 151329]
[INFO:swift] [INPUT] [gMASK] <sop> <|user|> 
 <|begin_of_image|> <|endoftext|> <|end_of_image|> 图中哪些内容带有免费标识 <|assistant|> 焚情、有一点动心 <|endoftext|>
[INFO:swift] [LABLES_IDS] [-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 108792, 98421, 5373, 111084, 98370, 98387, 151329]
[INFO:swift] [LABLES] [-100 * 14]焚情、有一点动心 <|endoftext|>
[INFO:swift] training_args: Seq2SeqTrainingArguments(
_n_gpu=1,
acc_strategy=token,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
additional_saved_files=[],
auto_find_batch_size=False,
batch_eval_metrics=False,
bf16=True,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=1,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
dispatch_batches=None,
do_eval=True,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_steps=50,
eval_strategy=steps,
evaluation_strategy=None,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
generation_config=GenerationConfig {
  "do_sample": true,
  "eos_token_id": 151329,
  "max_new_tokens": 2048,
  "pad_token_id": 151329,
  "temperature": 0.3,
  "top_k": 20,
  "top_p": 0.7
}
,
generation_max_length=None,
generation_num_beams=None,
gradient_accumulation_steps=16,
gradient_checkpointing=True,
gradient_checkpointing_kwargs=None,
greater_is_better=False,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=0.0001,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=/content/drive/MyDrive/glm/output/glm4v-9b-chat/v15-20240717-075452/runs,
logging_first_step=True,
logging_nan_inf_filter=True,
logging_steps=5,
logging_strategy=steps,
lr_scheduler_kwargs={},
lr_scheduler_type=cosine,
max_grad_norm=0.5,
max_steps=-1,
metric_for_best_model=loss,
metric_warmup_step=0,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_train_epochs=1,
optim=adamw_torch,
optim_args=None,
optim_target_modules=None,
output_dir=/content/drive/MyDrive/glm/output/glm4v-9b-chat/v15-20240717-075452,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=1,
per_device_train_batch_size=1,
predict_with_generate=False,
prediction_loss_only=False,
push_hub_strategy=push_best,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=False,
report_to=['tensorboard'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=/content/drive/MyDrive/glm/output/glm4v-9b-chat/v15-20240717-075452,
save_on_each_node=True,
save_only_model=False,
save_safetensors=True,
save_steps=50,
save_strategy=steps,
save_total_limit=2,
seed=42,
skip_memory_metrics=True,
sortish_sampler=True,
split_batches=None,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
train_dataset_sample=40,
train_sampler_random=True,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.05,
warmup_steps=0,
weight_decay=0.1,
)
[INFO:swift] The SftArguments will be saved in: /content/drive/MyDrive/glm/output/glm4v-9b-chat/v15-20240717-075452/sft_args.json
[INFO:swift] The Seq2SeqTrainingArguments will be saved in: /content/drive/MyDrive/glm/output/glm4v-9b-chat/v15-20240717-075452/training_args.json
[INFO:swift] The logging file will be saved in: /content/drive/MyDrive/glm/output/glm4v-9b-chat/v15-20240717-075452/logging.jsonl
Train:   0% 0/2 [00:00<?, ?it/s]/usr/lib/python3.10/multiprocessing/popen_fork.py:66: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.
  self.pid = os.fork()
Traceback (most recent call last):
  File "/content/drive/MyDrive/glm/swift/swift/cli/sft.py", line 5, in <module>
    sft_main()
  File "/content/drive/MyDrive/glm/swift/swift/utils/run_utils.py", line 27, in x_main
    result = llm_x(args, **kwargs)
  File "/content/drive/MyDrive/glm/swift/swift/llm/sft.py", line 317, in llm_sft
    trainer.train(training_args.resume_from_checkpoint)
  File "/content/drive/MyDrive/glm/swift/swift/trainers/mixin.py", line 518, in train
    res = super().train(resume_from_checkpoint, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 1885, in train
    return inner_training_loop(
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2216, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3238, in training_step
    loss = self.compute_loss(model, inputs)
  File "/content/drive/MyDrive/glm/swift/swift/trainers/trainers.py", line 183, in compute_loss
    outputs = model(**inputs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 819, in forward
    return model_forward(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/operations.py", line 807, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "/usr/local/lib/python3.10/dist-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/peft/peft_model.py", line 1430, in forward
    return self.base_model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/peft/tuners/tuners_utils.py", line 179, in forward
    return self.model.forward(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/glm-4v-9b-4-bits/modeling_chatglm.py", line 1038, in forward
    loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/drive/MyDrive/glm/swift/swift/llm/utils/model.py", line 1540, in cross_entropy_forward
    return __old_forward(self, inputs, target)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/loss.py", line 1185, in forward
    return F.cross_entropy(input, target, weight=self.weight,
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 3086, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
ValueError: Expected input batch_size (1691) to match target batch_size (92).
Train:   0% 0/2 [00:02<?, ?it/s]

Inference works fine.
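For reference, the error itself is just the shape check inside `F.cross_entropy`: the flattened logits and the flattened labels must have the same number of rows. A minimal, self-contained sketch that reproduces the message with the sizes from the traceback above (the vocabulary is shrunk and the tensors are random placeholders, so this only illustrates the error, not the fix):

```python
import torch
import torch.nn.functional as F

# Sizes taken from the traceback above; the real vocab (151552) is replaced
# by a small placeholder so the sketch stays cheap to run.
vocab_size = 1000
num_logit_rows = 1691   # rows of shift_logits.view(-1, vocab_size)
num_label_rows = 92     # rows of shift_labels.view(-1)

shift_logits = torch.randn(num_logit_rows, vocab_size)
shift_labels = torch.full((num_label_rows,), -100, dtype=torch.long)

try:
    F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)
except ValueError as e:
    # ValueError: Expected input batch_size (1691) to match target batch_size (92).
    print(e)
```

The mismatched sizes suggest the logits were computed over a sequence in which the image placeholder had already been expanded into vision tokens while the labels were not expanded the same way; that reading is only an interpretation, but it is consistent with the maintainer's advice further down to use the latest ms-swift together with the latest glm4v .py files.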

tastelikefeet commented 3 months ago

Where did the quantized model come from?

MisakaMikoto-o commented 3 months ago

Where did the quantized model come from?

This one: https://www.modelscope.cn/models/Xok135/glm-4v-9b-4-bits
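For context, the quantization_config embedded in that checkpoint (see the model_config dump above: bitsandbytes, nf4, float16 compute dtype, no double quant) is the kind produced by loading the original model with a BitsAndBytesConfig and re-saving it. A rough sketch of that recipe follows; it is only an illustration of how such a checkpoint is typically created, not the uploader's actual script, and the model id and output path are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantization settings mirroring the quantization_config shown in the log above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,
)

# "THUDM/glm-4v-9b" stands in for the original (unquantized) checkpoint;
# trust_remote_code is needed because the model ships its own modeling_chatglm.py.
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/glm-4v-9b",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4v-9b", trust_remote_code=True)

# Re-saving writes the 4-bit weights plus the quantization_config into the new folder.
model.save_pretrained("/content/glm-4v-9b-4-bits")
tokenizer.save_pretrained("/content/glm-4v-9b-4-bits")
```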

tectal commented 3 months ago

I got the same error.

$ E:/ygf/swift/uav/GLM4V_sft.sh
run sh: python C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\swift\cli\sft.py --model_type glm4v-9b-chat --model_id_or_path E:/ygf/swift/GLM-4V-9B-chat --sft_type lora --tuner_backend peft --template_type AUTO --dtype AUTO --output_dir E:/ygf/swift/output --dataset E:/ygf/swift/uav/Yi_1v1.json --train_dataset_sample -1 --num_train_epochs 1 --max_length 2048 --check_dataset_strategy warning --lora_rank 8 --lora_alpha 32 --lora_dropout_p 0.05 --lora_target_modules DEFAULT --gradient_checkpointing true --batch_size 1 --weight_decay 0.1 --learning_rate 1e-4 --gradient_accumulation_steps 16 --max_grad_norm 0.5 --warmup_ratio 0.03 --eval_steps 100 --save_steps 100 --save_total_limit 2 --logging_steps 100 --use_flash_attn false
[INFO:swift] Successfully registered C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\swift\llm\data\dataset_info.json
[INFO:swift] Start time of running main: 2024-07-22 20:11:46.089474
[INFO:swift] Setting template_type: glm4v
[INFO:swift] Setting args.lazy_tokenize: True
[INFO:swift] Setting args.dataloader_num_workers: 0 [INFO:swift] output_dir: E:\ygf\swift\output\glm4v-9b-chat\v0-20240722-201146 [INFO:swift] args: SftArguments(model_type='glm4v-9b-chat', model_id_or_path='E: \ygf\swift\GLM-4V-9B-chat', model_revision='master', sfttype='lora', freeze parameters=0.0, additional_trainable_parameters=[], tuner_backend='peft', templa te_type='glm4v', output_dir='E:\ygf\swift\output\glm4v-9b-chat\v0-20240722- 201146', add_output_dir_suffix=True, ddp_backend=None, ddp_find_unused_parameter s=None, ddp_broadcast_buffers=None, seed=42, resume_from_checkpoint=None, resume _only_model=False, ignore_data_skip=False, dtype='bf16', packing=False, dataset= ['E:/ygf/swift/uav/Yi_1v1.json'], val_dataset=[], dataset_seed=42, datasettest ratio=0.01, use_loss_scale=False, loss_scale_config_path='C:\Users\jxny02\ana conda3\envs\ygf_swift\Lib\site-packages\swift\llm\agent\default_loss_sca le_config.json', system=None, tools_prompt='react_en', max_length=2048, truncati on_strategy='delete', check_dataset_strategy='warning', model_name=[None, None], model_author=[None, None], quant_method=None, quantization_bit=0, hqq_axis=0, h qq_dynamic_config_path=None, bnb_4bit_comp_dtype='bf16', bnb_4bit_quant_type='nf 4', bnb_4bit_use_double_quant=True, bnb_4bit_quant_storage=None, lora_target_mod ules=['self_attention.query_key_value'], lora_rank=8, lora_alpha=32, lora_dropou t_p=0.05, lora_bias_trainable='none', lora_modules_to_save=[], lora_dtype='AUTO' , lora_lr_ratio=None, use_rslora=False, use_dora=False, init_lora_weights='true' , rope_scaling=None, boft_block_size=4, boft_block_num=0, boft_n_butterfly_facto r=1, boft_target_modules=['DEFAULT'], boft_dropout=0.0, boft_modules_to_save=[], vera_rank=256, vera_target_modules=['DEFAULT'], vera_projection_prng_key=0, ver a_dropout=0.0, vera_d_initial=0.1, vera_modules_to_save=[], adapter_act='gelu', adapter_length=128, use_galore=False, galore_rank=128, galore_target_modules=Non e, galore_update_proj_gap=50, galore_scale=1.0, galore_proj_type='std', galore_o ptim_per_parameter=False, galore_with_embedding=False, adalora_target_r=8, adalo ra_init_r=12, adalora_tinit=0, adalora_tfinal=0, adalora_deltaT=1, adalora_beta1 =0.85, adalora_beta2=0.85, adalora_orth_reg_weight=0.5, ia3_target_modules=['DEF AULT'], ia3_feedforward_modules=[], ia3_modules_to_save=[], llamapro_num_new_blo cks=4, llamapro_num_groups=None, neftune_noise_alpha=None, neftune_backend='tran sformers', lisa_activated_layers=0, lisa_step_interval=20, gradient_checkpointin g=True, deepspeed=None, batch_size=1, eval_batch_size=1, num_train_epochs=1, max _steps=-1, optim='adamw_torch', adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1 e-08, learning_rate=0.0001, weight_decay=0.1, gradient_accumulation_steps=16, ma x_grad_norm=0.5, predict_with_generate=False, lr_scheduler_type='linear', warmup _ratio=0.03, eval_steps=100, save_steps=100, save_only_model=False, save_total_l imit=2, logging_steps=100, dataloader_num_workers=0, dataloader_pin_memory=True, dataloader_drop_last=False, push_to_hub=False, hub_model_id=None, hub_token=Non e, hub_private_repo=False, push_hub_strategy='push_best', test_oom_error=False, disable_tqdm=False, lazy_tokenize=True, preprocess_num_proc=1, use_flash_attn=Fa lse, ignore_args_error=False, check_model_is_latest=True, logging_dir='E:\ygf\ swift\output\glm4v-9b-chat\v0-20240722-201146/runs', report_to=['tensorboard' ], acc_strategy='token', save_on_each_node=True, evaluation_strategy='steps', sa ve_strategy='steps', save_safetensors=True, 
gpu_memory_fraction=None, include_nu m_input_tokens_seen=False, local_repo_path=None, custom_register_path=None, cust om_dataset_info=None, device_map_config_path=None, max_new_tokens=2048, do_sampl e=True, temperature=0.3, top_k=20, top_p=0.7, repetition_penalty=1.0, num_beams= 1, fsdp='', fsdp_config=None, sequence_parallel_size=1, model_layer_cls_name=Non e, metric_warmup_step=0, fsdp_num=1, per_device_train_batch_size=None, per_devic e_eval_batch_size=None, eval_strategy=None, self_cognition_sample=0, train_datas et_mix_ratio=0.0, train_dataset_mix_ds=['ms-bench'], train_dataset_sample=-1, va l_dataset_sample=None, safe_serialization=None, only_save_model=None, neftune_al pha=None, deepspeed_config_path=None, model_cache_dir=None, custom_train_dataset _path=[], custom_val_dataset_path=[]) [INFO:swift] Global seed set to 42 device_count: 1 rank: -1, local_rank: -1, world_size: 1, local_world_size: 1 [INFO:swift] Loading the model using model_dir: E:\ygf\swift\GLM-4V-9B-chat
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. Loading checkpoint shards: 100%|███████████████| 15/15 [02:34<00:00, 10.33s/it] [INFO:swift] model.max_model_len: 8192 [INFO:swift] model_config: ChatGLMConfig { "_name_or_path": "E:\ygf\swift\GLM-4V-9B-chat", "add_bias_linear": false, "add_qkv_bias": true, "apply_query_key_layer_scaling": true, "apply_residual_connection_post_layernorm": false, "architectures": [ "ChatGLMModel" ], "attention_dropout": 0.0, "attention_softmax_in_fp32": true, "auto_map": { "AutoConfig": "configuration_chatglm.ChatGLMConfig", "AutoModel": "modeling_chatglm.ChatGLMForConditionalGeneration", "AutoModelForCausalLM": "modeling_chatglm.ChatGLMForConditionalGeneration", "AutoModelForSeq2SeqLM": "modeling_chatglm.ChatGLMForConditionalGeneration", "AutoModelForSequenceClassification": "modeling_chatglm.ChatGLMForSequenceCl assification" }, "bias_dropout_fusion": true, "boi_token_id": 151339, "classifier_dropout": null, "eoi_token_id": 151340, "eos_token_id": [ 151329, 151336, 151338 ], "ffn_hidden_size": 13696, "fp32_residual_connection": false, "hidden_dropout": 0.0, "hidden_size": 4096, "kv_channels": 128, "layernorm_epsilon": 1.5625e-07, "model_type": "chatglm", "multi_query_attention": true, "multi_query_group_num": 2, "num_attention_heads": 32, "num_layers": 40, "original_rope": true, "pad_token_id": 151329, "padded_vocab_size": 151552, "post_layer_norm": true, "pre_seq_len": null, "prefix_projection": false, "rmsnorm": true, "rope_ratio": 1, "seq_length": 8192, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.41.2", "use_cache": true, "vision_config": { "dropout_prob": 0.0, "hidden_act": "gelu", "hidden_size": 1792, "image_size": 1120, "in_channels": 3, "intermediate_size": 15360, "layer_norm_eps": 1e-06, "num_heads": 16, "num_hidden_layers": 63, "num_positions": 6401, "patch_size": 14, "scaling_factor": 8 }, "vocab_size": 151552 }

[INFO:swift] generation_config: GenerationConfig { "do_sample": true, "eos_token_id": 151329, "max_new_tokens": 2048, "pad_token_id": 151329, "temperature": 0.3, "top_k": 20, "top_p": 0.7 }

[INFO:swift] lora_target_modules: ['self_attention.query_key_value'] [INFO:swift] lora_modules_to_save: [] [INFO:swift] lora_config: get_wrapped_class..PeftWrapper(peft_type=<Peft Type.LORA: 'LORA'>, auto_mapping=None, base_model_name_or_path='E:\ygf\swift\ GLM-4V-9B-chat', revision=None, task_type='CAUSAL_LM', inference_mode=False, r=8 , target_modules={'self_attention.query_key_value'}, lora_alpha=32, lora_dropout =0.05, fan_in_fan_out=False, bias='none', use_rslora=False, modules_to_save=[], init_lora_weights=True, layers_to_transform=None, layers_pattern=None, rank_patt ern={}, alpha_pattern={}, megatron_config=None, megatron_core='megatron.core', l oftq_config={}, use_dora=False, layer_replication=None, lora_dtype=None, lorap_l r_ratio=None, lorap_emb_lr=1e-06)

quires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.0.input_layernorm.weig ht]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.0.self_attention.query _key_value.base_layer.weight]: requires_grad=False, dtype=torch.bfloat16, device =cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.0.self_attention.query _key_value.base_layer.bias]: requires_grad=False, dtype=torch.bfloat16, device=c uda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.0.self_attention.query _key_value.lora_A.default.weight]: requires_grad=True, dtype=torch.bfloat16, dev ice=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.0.self_attention.query _key_value.lora_B.default.weight]: requires_grad=True, dtype=torch.bfloat16, dev ice=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.0.self_attention.dense .weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.0.post_attention_layer norm.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.0.mlp.dense_h_to_4h.we ight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.0.mlp.dense_4h_to_h.we ight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.1.input_layernorm.weig ht]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.1.self_attention.query _key_value.base_layer.weight]: requires_grad=False, dtype=torch.bfloat16, device =cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.1.self_attention.query _key_value.base_layer.bias]: requires_grad=False, dtype=torch.bfloat16, device=c uda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.1.self_attention.query _key_value.lora_A.default.weight]: requires_grad=True, dtype=torch.bfloat16, dev ice=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.1.self_attention.query _key_value.lora_B.default.weight]: requires_grad=True, dtype=torch.bfloat16, dev ice=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.1.self_attention.dense .weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.1.post_attention_layer norm.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.1.mlp.dense_h_to_4h.we ight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.1.mlp.dense_4h_to_h.we ight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] [base_model.model.transformer.encoder.layers.2.input_layernorm.weig ht]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0 [INFO:swift] ... 
[INFO:swift] PeftModelForCausalLM( (base_model): LoraModel( (model): ChatGLMForConditionalGeneration( (transformer): ChatGLMModel( (embedding): Embedding( (word_embeddings): Embedding(151552, 4096) ) (rotary_pos_emb): RotaryEmbedding() (encoder): GLMTransformer( (layers): ModuleList( (0-39): 40 x GLMBlock( (input_layernorm): RMSNorm() (self_attention): SelfAttention( (query_key_value): lora.Linear( (base_layer): Linear(in_features=4096, out_features=4608, bias =True) (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=4096, out_features=8, bias=Fal se) ) (lora_B): ModuleDict( (default): Linear(in_features=8, out_features=4608, bias=Fal se) ) (lora_embedding_A): ParameterDict() (lora_embedding_B): ParameterDict() ) (core_attention): CoreAttention( (attention_dropout): Dropout(p=0.0, inplace=False) ) (dense): Linear(in_features=4096, out_features=4096, bias=False) ) (post_attention_layernorm): RMSNorm() (mlp): MLP( (dense_h_to_4h): Linear(in_features=4096, out_features=27392, bi as=False) (dense_4h_to_h): Linear(in_features=13696, out_features=4096, bi as=False) ) ) ) (final_layernorm): RMSNorm() ) (output_layer): Linear(in_features=4096, out_features=151552, bias=False ) (vision): EVA2CLIPModel( (patch_embedding): PatchEmbedding( (proj): Conv2d(3, 1792, kernel_size=(14, 14), stride=(14, 14))
(position_embedding): Embedding(6401, 1792) ) (transformer): Transformer( (layers): ModuleList( (0-62): 63 x TransformerLayer( (input_layernorm): LayerNorm((1792,), eps=1e-06, elementwise_aff ine=True) (attention): Attention( (query_key_value): Linear(in_features=1792, out_features=5376, bias=True) (dense): Linear(in_features=1792, out_features=1792, bias=True ) (output_dropout): Dropout(p=0.0, inplace=False) ) (mlp): MLP( (activation_fn): GELUActivation() (fc1): Linear(in_features=1792, out_features=15360, bias=True) (fc2): Linear(in_features=15360, out_features=1792, bias=True) ) (post_attention_layernorm): LayerNorm((1792,), eps=1e-06, elemen twise_affine=True) ) ) ) (linear_proj): GLU( (linear_proj): Linear(in_features=4096, out_features=4096, bias=Fals e) (norm1): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(act1): GELU(approximate='none') (dense_h_to_4h): Linear(in_features=4096, out_features=13696, bias=F alse) (gate_proj): Linear(in_features=4096, out_features=13696, bias=False ) (dense_4h_to_h): Linear(in_features=13696, out_features=4096, bias=F alse) ) (conv): Conv2d(1792, 4096, kernel_size=(2, 2), stride=(2, 2)) ) ) ) ) ) [INFO:swift] PeftModelForCausalLM: 13909.1062M Params (2.7853M Trainable [0.0200 %]), 0.0000M Buffers. [INFO:swift] Setting model.config.use_cache: False [INFO:swift] check dataset... [INFO:swift] check_dataset_strategy: 'warning' 100%|█████████████████████████████████| 69182/69182 [00:06<00:00, 10917.65it/s] 100%|█████████████████████████████████████| 698/698 [00:00<00:00, 11636.12it/s] [INFO:swift] train_dataset: Dataset({ features: ['query', 'response', 'images'], num_rows: 69182 }) [INFO:swift] val_dataset: Dataset({ features: ['query', 'response', 'images'], num_rows: 698 }) [INFO:swift] system: None [INFO:swift] args.lazy_tokenize: True [INFO:swift] [INPUT_IDS] [151331, 151333, 151336, 198, 151339, 151329, 151340, 7 85, 274, 23121, 3867, 1992, 389, 6702, 220, 17, 11, 220, 115937, 16, 11, 323, 43 2, 702, 1012, 220, 120392, 2849, 2474, 279, 274, 23121, 2400, 13, 3555, 374, 279 , 1482, 6008, 76039, 320, 2916, 8, 897, 315, 419, 8044, 315, 32896, 30, 151337, 1986, 32896, 8044, 594, 1482, 6008, 76039, 320, 2916, 8, 897, 374, 902, 76039, 1 3, 151329] [INFO:swift] [INPUT] [gMASK] <|user|> <|begin_of_image|> <|endoftext|> <|end_of_image|> The sowing took place on Nove mber 2, 2021, and it has been 178 days since the sowing date. What is the curren t plant lodging (PL) value of this variety of wheat? <|assistant|> This wheat va riety's current plant lodging (PL) value is no lodging. <|endoftext|> [INFO:swift] [LABLES_IDS] [-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, - 100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -10 0, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 1986, 32896, 8044, 594, 1482, 6008, 76039, 320, 2916, 8, 897, 374, 902, 76039, 13, 151329] [INFO:swift] [LABLES] [-100 * 51]This wheat variety's current plant lodging (PL) value is no lodging. 
<|endoftext|> [INFO:swift] training_args: Seq2SeqTrainingArguments( _n_gpu=1, acc_strategy=token, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batc hes': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accum ulation_kwargs': None}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, additional_saved_files=[], auto_find_batch_size=False, batch_eval_metrics=False, bf16=True, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_steps=100, eval_strategy=IntervalStrategy.STEPS, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xlafsdp grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, generation_config=GenerationConfig { "do_sample": true, "eos_token_id": 151329, "max_new_tokens": 2048, "pad_token_id": 151329, "temperature": 0.3, "top_k": 20, "top_p": 0.7 } , generation_max_length=None, generation_num_beams=None, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=False, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=E:\ygf\swift\output\glm4v-9b-chat\v0-20240722-201146/runs, logging_first_step=True, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=IntervalStrategy.STEPS, lr_scheduler_kwargs={}, lr_scheduler_type=SchedulerType.LINEAR, max_grad_norm=0.5, max_steps=-1, metric_for_best_model=loss, metric_warmup_step=0, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=E:\ygf\swift\output\glm4v-9b-chat\v0-20240722-201146, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=1, per_device_train_batch_size=1, predict_with_generate=False, prediction_loss_only=False, push_hub_strategy=push_best, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=E:\ygf\swift\output\glm4v-9b-chat\v0-20240722-201146, save_on_each_node=True, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=IntervalStrategy.STEPS, save_total_limit=2, seed=42, skip_memory_metrics=True, sortish_sampler=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, 
tpu_metrics_debug=False,
tpu_num_cores=None,
train_dataset_sample=69182,
train_sampler_random=True,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.03,
warmup_steps=0,
weight_decay=0.1,
)
[INFO:swift] The SftArguments will be saved in: E:\ygf\swift\output\glm4v-9b-chat\v0-20240722-201146\sft_args.json
[INFO:swift] The Seq2SeqTrainingArguments will be saved in: E:\ygf\swift\output\glm4v-9b-chat\v0-20240722-201146\training_args.json
[INFO:swift] The logging file will be saved in: E:\ygf\swift\output\glm4v-9b-chat\v0-20240722-201146\logging.jsonl
Train:   0%|          | 0/4323 [00:00<?, ?it/s]C:\Users\jxny02\.cache\huggingface\modules\transformers_modules\GLM-4V-9B-chat\modeling_chatglm.py:244: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  context_layer = torch.nn.functional.scaled_dot_product_attention(query_layer, key_layer, value_layer,
Traceback (most recent call last):
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\swift\cli\sft.py", line 5, in <module>
    sft_main()
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\swift\utils\run_utils.py", line 27, in x_main
    result = llm_x(args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\swift\llm\sft.py", line 308, in llm_sft
    trainer.train(training_args.resume_from_checkpoint)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\swift\trainers\mixin.py", line 518, in train
    res = super().train(resume_from_checkpoint, *args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\transformers\trainer.py", line 1885, in train
    return inner_training_loop(
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\transformers\trainer.py", line 2216, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\transformers\trainer.py", line 3238, in training_step
    loss = self.compute_loss(model, inputs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\swift\trainers\trainers.py", line 183, in compute_loss
    outputs = model(**inputs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\accelerate\utils\operations.py", line 819, in forward
    return model_forward(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\accelerate\utils\operations.py", line 807, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\amp\autocast_mode.py", line 16, in decorate_autocast
    return func(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\peft\peft_model.py", line 1430, in forward
    return self.base_model(
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\peft\tuners\tuners_utils.py", line 179, in forward
    return self.model.forward(*args, **kwargs)
  File "C:\Users\jxny02\.cache\huggingface\modules\transformers_modules\GLM-4V-9B-chat\modeling_chatglm.py", line 1216, in forward
    loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\swift\llm\utils\model.py", line 1505, in cross_entropy_forward
    return __old_forward(self, inputs, target)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\loss.py", line 1185, in forward
    return F.cross_entropy(input, target, weight=self.weight,
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\functional.py", line 3086, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
ValueError: Expected input batch_size (71) to match target batch_size (1670).
Train: 0%| | 0/4323 [00:02<?, ?it/s]

Jintao-Huang commented 3 months ago

Use the latest ms-swift and the latest glm4v .py files.
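One possible way to act on this advice (a sketch only; the ModelScope model id and the local path are assumptions, not taken from the thread) is to update ms-swift, re-download the official glm-4v-9b repo, and copy its current .py files over the stale ones shipped with the quantized checkpoint:

```python
# First, from a shell:  pip install -U ms-swift
import glob
import shutil

from modelscope import snapshot_download

# Assumed ModelScope id of the official model and assumed path of the quantized copy.
src_dir = snapshot_download("ZhipuAI/glm-4v-9b")
quantized_dir = "/content/glm-4v-9b-4-bits"

# Overwrite the old .py files (modeling_chatglm.py, configuration_chatglm.py, ...)
# in the quantized checkpoint with the freshly downloaded ones.
for py_file in glob.glob(f"{src_dir}/*.py"):
    shutil.copy(py_file, quantized_dir)
```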