modelscope / ms-swift

Use PEFT or Full-parameter to finetune 400+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
https://swift.readthedocs.io/zh-cn/latest/Instruction/index.html
Apache License 2.0

Can't run inference with finetuned `Florence 2` models. #1299

Closed: Aunali321 closed this issue 4 months ago

Aunali321 commented 4 months ago

Describe the bug

Just fine-tuned (full) the `florence-2-large-ft` model, and now I can't run inference with it.

Command to reproduce

CUDA_VISIBLE_DEVICES=0 swift infer \
 --ckpt_dir output/florence-2-large-ft/v4-20240704-120540/checkpoint-315/ \
 --stream false \
 --max_new_tokens 1024 \
 --model_type florence-2-large-ft \
 --max_model_len 1024
run sh: `python /home/azureuser/swift/swift/cli/infer.py --ckpt_dir output/florence-2-large-ft/v4-20240704-120540/checkpoint-315/ --stream false --max_new_tokens 1024 --model_type florence-2-large-ft --max_model_len 1024`
[INFO:swift] Successfully registered `/home/azureuser/swift/swift/llm/data/dataset_info.json`
[INFO:swift] Start time of running main: 2024-07-04 13:04:10.449588
[INFO:swift] ckpt_dir: /home/azureuser/train_vlms/florence/output/florence-2-large-ft/v4-20240704-120540/checkpoint-315
[INFO:swift] Setting model_info['revision']: master
[INFO:swift] Setting self.eval_human: True
[INFO:swift] Setting overwrite_generation_config: True
[INFO:swift] args: InferArguments(model_type='florence-2-large-ft', model_id_or_path='AI-ModelScope/Florence-2-large-ft', model_revision='master', sft_type='full', template_type='florence', infer_backend='pt', ckpt_dir='/home/azureuser/train_vlms/florence/output/florence-2-large-ft/v4-20240704-120540/checkpoint-315', load_args_from_ckpt_dir=True, load_dataset_config=False, eval_human=True, seed=42, dtype='bf16', dataset=[], val_dataset=[], dataset_seed=42, dataset_test_ratio=0.01, show_dataset_sample=10, save_result=True, system=None, tools_prompt='react_en', max_length=None, truncation_strategy='delete', check_dataset_strategy='none', model_name=[None, None], model_author=[None, None], quant_method=None, quantization_bit=0, hqq_axis=0, hqq_dynamic_config_path=None, bnb_4bit_comp_dtype='bf16', bnb_4bit_quant_type='nf4', bnb_4bit_use_double_quant=True, bnb_4bit_quant_storage=None, max_new_tokens=1024, do_sample=True, temperature=0.3, top_k=20, top_p=0.7, repetition_penalty=1.0, num_beams=1, stop_words=None, rope_scaling=None, use_flash_attn=None, ignore_args_error=False, stream=False, merge_lora=False, merge_device_map='cpu', save_safetensors=True, overwrite_generation_config=True, verbose=None, local_repo_path=None, custom_register_path=None, custom_dataset_info=None, device_map_config_path=None, gpu_memory_utilization=0.9, tensor_parallel_size=1, max_model_len=1024, disable_custom_all_reduce=True, enforce_eager=False, vllm_enable_lora=False, vllm_max_lora_rank=16, lora_modules=[], image_input_shape=None, image_feature_size=None, self_cognition_sample=0, train_dataset_sample=-1, val_dataset_sample=None, safe_serialization=None, model_cache_dir=None, merge_lora_and_save=None, custom_train_dataset_path=[], custom_val_dataset_path=[], vllm_lora_modules=None)
[INFO:swift] Global seed set to 42
[INFO:swift] device_count: 1
[INFO:swift] Loading the model using model_dir: /home/azureuser/train_vlms/florence/output/florence-2-large-ft/v4-20240704-120540/checkpoint-315
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO:swift] model_kwargs: {'device_map': 'cuda:0'}
Traceback (most recent call last):
  File "/home/azureuser/swift/swift/cli/infer.py", line 5, in <module>
    infer_main()
  File "/home/azureuser/swift/swift/utils/run_utils.py", line 27, in x_main
    result = llm_x(args, **kwargs)
  File "/home/azureuser/swift/swift/llm/infer.py", line 290, in llm_infer
    model, template = prepare_model_template(args, device_map=device_map)
  File "/home/azureuser/swift/swift/llm/infer.py", line 179, in prepare_model_template
    model, tokenizer = get_model_tokenizer(
  File "/home/azureuser/swift/swift/llm/utils/model.py", line 5387, in get_model_tokenizer
    model, tokenizer = get_function(model_dir, torch_dtype, model_kwargs, load_model, **kwargs)
  File "/home/azureuser/swift/swift/llm/utils/model.py", line 2631, in get_model_tokenizer_florence
    model, tokenizer = get_model_tokenizer_from_repo(
  File "/home/azureuser/swift/swift/llm/utils/model.py", line 934, in get_model_tokenizer_from_repo
    model = automodel_class.from_pretrained(
  File "/home/azureuser/miniconda3/envs/swift/lib/python3.10/site-packages/modelscope/utils/hf_util.py", line 113, in from_pretrained
    module_obj = module_class.from_pretrained(model_dir, *model_args,
  File "/home/azureuser/miniconda3/envs/swift/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/home/azureuser/miniconda3/envs/swift/lib/python3.10/site-packages/modelscope/utils/hf_util.py", line 76, in from_pretrained
    return ori_from_pretrained(cls, model_dir, *model_args, **kwargs)
  File "/home/azureuser/miniconda3/envs/swift/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3710, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/azureuser/.cache/huggingface/modules/transformers_modules/checkpoint-315/modeling_florence2.py", line 2529, in __init__
    assert config.vision_config.model_type == 'davit', '
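
The failing line is an assertion in Florence-2's custom modeling code that requires `config.vision_config.model_type` to equal `'davit'`. A quick way to see whether the saved checkpoint still carries that field is to read its `config.json` directly (a minimal sketch, assuming the checkpoint path from the command above):

import json

# Inspect the vision_config saved with the finetuned checkpoint.
# If "model_type" is missing or no longer "davit", the assertion above fires on load.
with open("output/florence-2-large-ft/v4-20240704-120540/checkpoint-315/config.json") as f:
    config = json.load(f)
print(config.get("vision_config", {}).get("model_type"))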

Finetune command

CUDA_VISIBLE_DEVICES=0 swift sft \
 --model_type florence-2-large-ft \
 --dataset output.jsonl \
 --sft_type full \
 --num_train_epochs 3 \
 --batch_size 16 \
 --learning_rate 1e-5 \
 --gradient_accumulation_steps 4 \
 --warmup_ratio 0.1 \
 --max_length 512 \
 --logging_steps 50 \
 --save_strategy epoch \
 --evaluation_strategy epoch

Your hardware and system info

Cuda compilation tools, release 12.4, V12.4.99, Build cuda_12.4.r12.4/compiler.33961263_0
torch 2.3.1
torchvision 0.18.1
GPU: A100 80G on Azure

Additional context

Training logs

{"loss": 5.28112936, "acc": 0.27560034, "grad_norm": 35.5, "learning_rate": 3.1e-07, "memory(GiB)": 73.62, "train_speed(iter/s)": 0.131608, "epoch": 0.00950119, "global_step": 1}
{"loss": 4.16663127, "acc": 0.36083926, "grad_norm": 8.9375, "learning_rate": 9.9e-06, "memory(GiB)": 76.16, "train_speed(iter/s)": 0.142979, "epoch": 0.47505938, "global_step": 50}
{"loss": 2.98086548, "acc": 0.46512562, "grad_norm": 9.1875, "learning_rate": 8.64e-06, "memory(GiB)": 76.16, "train_speed(iter/s)": 0.144274, "epoch": 0.95011876, "global_step": 100}
{"eval_loss": 2.72822046, "eval_acc": 0.46707317, "eval_runtime": 4.4326, "eval_samples_per_second": 15.341, "eval_steps_per_second": 1.128, "epoch": 0.9976247, "global_step": 105}
{"loss": 2.73379517, "acc": 0.4934227, "grad_norm": 9.75, "learning_rate": 6.29e-06, "memory(GiB)": 76.17, "train_speed(iter/s)": 0.142259, "epoch": 1.42517815, "global_step": 150}
{"loss": 2.66033661, "acc": 0.50013248, "grad_norm": 9.3125, "learning_rate": 3.55e-06, "memory(GiB)": 76.17, "train_speed(iter/s)": 0.141169, "epoch": 1.90023753, "global_step": 200}
{"eval_loss": 2.58566856, "eval_acc": 0.48414634, "eval_runtime": 3.6366, "eval_samples_per_second": 18.699, "eval_steps_per_second": 1.375, "epoch": 1.99524941, "global_step": 210}
{"loss": 2.63713165, "acc": 0.50154091, "grad_norm": 8.4375, "learning_rate": 1.25e-06, "memory(GiB)": 76.17, "train_speed(iter/s)": 0.141292, "epoch": 2.37529691, "global_step": 250}
{"loss": 2.622659, "acc": 0.50444355, "grad_norm": 8.4375, "learning_rate": 7e-08, "memory(GiB)": 76.17, "train_speed(iter/s)": 0.141353, "epoch": 2.85035629, "global_step": 300}
{"eval_loss": 2.57908154, "eval_acc": 0.48536585, "eval_runtime": 4.4806, "eval_samples_per_second": 15.176, "eval_steps_per_second": 1.116, "epoch": 2.99287411, "global_step": 315}
{"eval_loss": 2.57908154, "eval_acc": 0.48536585, "eval_runtime": 4.4909, "eval_samples_per_second": 15.142, "eval_steps_per_second": 1.113, "epoch": 2.99287411, "global_step": 315}
{"train_runtime": 2251.9759, "train_samples_per_second": 8.968, "train_steps_per_second": 0.14, "total_flos": 185713650794496.0, "train_loss": 2.95275494, "epoch": 2.99287411, "global_step": 315}
{"memory": {"cuda": "76.17GiB"}, "train_time": {"train_runtime": 2251.9759, "n_train_samples": 6732, "train_samples_per_second": 2.9893747974834013}, "last_model_checkpoint": "/home/azureuser/train_vlms/florence/output/florence-2-large-ft/v4-20240704-120540/checkpoint-315", "best_model_checkpoint": "/home/azureuser/train_vlms/florence/output/florence-2-large-ft/v4-20240704-120540/checkpoint-315", "best_metric": 2.57908154, "global_step": 315, "log_history": [{"loss": 5.28112936, "acc": 0.27560034, "grad_norm": 35.5, "learning_rate": 3.125e-07, "memory(GiB)": 73.62, "train_speed(iter/s)": 0.131608, "epoch": 0.009501187648456057, "step": 1}, {"loss": 4.16663127, "acc": 0.36083926, "grad_norm": 8.9375, "learning_rate": 9.90051298775959e-06, "memory(GiB)": 76.16, "train_speed(iter/s)": 0.142979, "epoch": 0.4750593824228028, "step": 50}, {"loss": 2.98086548, "acc": 0.46512562, "grad_norm": 9.1875, "learning_rate": 8.641802027869774e-06, "memory(GiB)": 76.16, "train_speed(iter/s)": 0.144274, "epoch": 0.9501187648456056, "step": 100}, {"eval_loss": 2.7282204627990723, "eval_acc": 0.46707317073170734, "eval_runtime": 4.4326, "eval_samples_per_second": 15.341, "eval_steps_per_second": 1.128, "epoch": 0.997624703087886, "step": 105}, {"loss": 2.73379517, "acc": 0.4934227, "grad_norm": 9.75, "learning_rate": 6.289626849272062e-06, "memory(GiB)": 76.17, "train_speed(iter/s)": 0.142259, "epoch": 1.4251781472684084, "step": 150}, {"loss": 2.66033661, "acc": 0.50013248, "grad_norm": 9.3125, "learning_rate": 3.55023654904709e-06, "memory(GiB)": 76.17, "train_speed(iter/s)": 0.141169, "epoch": 1.9002375296912115, "step": 200}, {"eval_loss": 2.5856685638427734, "eval_acc": 0.4841463414634146, "eval_runtime": 3.6366, "eval_samples_per_second": 18.699, "eval_steps_per_second": 1.375, "epoch": 1.995249406175772, "step": 210}, {"loss": 2.63713165, "acc": 0.50154091, "grad_norm": 8.4375, "learning_rate": 1.2461429637659466e-06, "memory(GiB)": 76.17, "train_speed(iter/s)": 0.141292, "epoch": 2.375296912114014, "step": 250}, {"loss": 2.622659, "acc": 0.50444355, "grad_norm": 8.4375, "learning_rate": 6.91585183706428e-08, "memory(GiB)": 76.17, "train_speed(iter/s)": 0.141353, "epoch": 2.850356294536817, "step": 300}, {"eval_loss": 2.5790815353393555, "eval_acc": 0.4853658536585366, "eval_runtime": 4.4806, "eval_samples_per_second": 15.176, "eval_steps_per_second": 1.116, "epoch": 2.992874109263658, "step": 315}, {"eval_loss": 2.5790815353393555, "eval_acc": 0.4853658536585366, "eval_runtime": 4.4909, "eval_samples_per_second": 15.142, "eval_steps_per_second": 1.113, "epoch": 2.992874109263658, "step": 315}, {"train_runtime": 2251.9759, "train_samples_per_second": 8.968, "train_steps_per_second": 0.14, "total_flos": 185713650794496.0, "train_loss": 2.9527549425760906, "epoch": 2.992874109263658, "step": 315}], "model_info": "Florence2ForConditionalGeneration: 822.6939M Params (822.6939M Trainable [100.0000%]), 0.2561M Buffers.", "dataset_info": null}
Aunali321 commented 4 months ago

This is the fix: https://huggingface.co/microsoft/Florence-2-base-ft/discussions/14#66866b5643738edbd3301da1
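
For anyone hitting the same error: as I read the linked discussion, the checkpoint's `config.json` can lose the nested `vision_config` entry `"model_type": "davit"` when the finetuned model is saved, which is exactly what the assertion in `modeling_florence2.py` checks. A minimal sketch of the repair (assuming the checkpoint path from the report above):

import json

ckpt = "output/florence-2-large-ft/v4-20240704-120540/checkpoint-315"
cfg_path = f"{ckpt}/config.json"

with open(cfg_path) as f:
    config = json.load(f)

# modeling_florence2.py asserts config.vision_config.model_type == 'davit',
# so put the field back if saving the finetuned model dropped or changed it.
config.setdefault("vision_config", {})["model_type"] = "davit"

with open(cfg_path, "w") as f:
    json.dump(config, f, indent=2)

After patching the config, the `swift infer` command from the report should load the checkpoint again.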

hjh0119 commented 4 months ago

Got it, thanks.