BAAI-DCAI / Bunny

A family of lightweight multimodal models.
Apache License 2.0

Tokenization mismatch #75

Closed swhoosh closed 2 months ago

swhoosh commented 4 months ago

I tried finetuning my model after stage 1. Apparently, there are tokenization mismatches and the loss is 0. Do you have any idea what the problem might be? Thanks!

sh finetune_full.sh

WARNING: tokenization mismatch: 153 vs. 154. (ignored)
WARNING: tokenization mismatch: 167 vs. 168. (ignored)
WARNING: tokenization mismatch: 56 vs. 57. (ignored)
WARNING: tokenization mismatch: 96 vs. 97. (ignored)
WARNING: tokenization mismatch: 56 vs. 57. (ignored)
WARNING: tokenization mismatch: 62 vs. 63. (ignored)
{'loss': 0.0, 'grad_norm': 0.0, 'learning_rate': 2.6490066225165566e-08, 'epoch': 0.0}
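
From the warning text, it looks like the mismatched samples get their labels fully masked, which would explain the zero loss. Roughly (a paraphrased sketch of the LLaVA-style length check, not the exact Bunny code):

# Paraphrased sketch of the LLaVA-style length check that emits this warning
# (assumption: Bunny's train.py follows the same pattern). IGNORE_INDEX positions
# are skipped by the loss, so masking the whole target zeroes that sample's loss.
IGNORE_INDEX = -100

def mask_on_mismatch(target, cur_len, total_len, tokenizer):
    if cur_len < tokenizer.model_max_length and cur_len != total_len:
        target[:] = IGNORE_INDEX  # every label for this sample is ignored
        print(f"WARNING: tokenization mismatch: {cur_len} vs. {total_len}. (ignored)")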
Isaachhh commented 4 months ago

[screenshot attached]

Are you using the correct VERSION?

swhoosh commented 4 months ago

I believe so. Here is my finetune_full.sh. FYI, I was able to train using a variation of Llama-3, namely aaditya/Llama3-OpenBioLLM-8B. I pretrained it myself in a similar manner to meta-llama/Meta-Llama-3-8B-Instruct. Both use the same code, just a different base model.

#!/bin/bash

MODEL_TYPE=llama3-8b

PRETRAIN_DIR=bunny-llama3-8b-pretrain
OUTPUT_DIR=bunny-$MODEL_TYPE-s2

mkdir -p ./checkpoints-finetune/$OUTPUT_DIR

deepspeed bunny/train/train.py \
    --use_s2 True \
    --unfreeze_vision_tower False \
    --deepspeed ./script/deepspeed/zero3.json \
    --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
    --model_type $MODEL_TYPE \
    --version llama \
    --data_path /image_text/train_list/merged/train_merged_first.json \
    --image_folder /image_text/datasets \
    --vision_tower google/siglip-so400m-patch14-384 \
    --pretrain_mm_mlp_adapter ./checkpoints-pretrain/$PRETRAIN_DIR/mm_projector.bin \
    --mm_projector_type mlp2x_gelu \
    --mm_projector_lr 1e-05 \
    --image_aspect_ratio pad \
    --group_by_modality_length False \
    --bf16 True \
    --output_dir ./checkpoints-finetune/$OUTPUT_DIR \
    --num_train_epochs 3 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 2 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2000 \
    --save_total_limit 50 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 8 \
    --lazy_preprocess True \
    --report_to none | tee 2>&1 ./checkpoints-finetune/$OUTPUT_DIR/log.txt

Here are the pretrained config.json files of both models.

{
  "_name_or_path": "aaditya/Llama3-OpenBioLLM-8B",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "freeze_mm_mlp_adapter": false,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "image_aspect_ratio": "square",
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "mm_hidden_size": 3456,
  "mm_projector_lr": null,
  "mm_projector_type": "mlp2x_gelu",
  "mm_vision_tower": "google/siglip-so400m-patch14-384",
  "model_type": "bunny-llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "tokenizer_model_max_length": 2048,
  "tokenizer_padding_side": "right",
  "torch_dtype": "bfloat16",
  "transformers_version": "4.39.3",
  "tune_mm_mlp_adapter": true,
  "unfreeze_vision_tower": false,
  "use_cache": true,
  "use_mm_proj": true,
  "use_s2": true,
  "vocab_size": 128256
}

{
  "_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "freeze_mm_mlp_adapter": false,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "image_aspect_ratio": "square",
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "mm_hidden_size": 3456,
  "mm_projector_lr": null,
  "mm_projector_type": "mlp2x_gelu",
  "mm_vision_tower": "google/siglip-so400m-patch14-384",
  "model_type": "bunny-llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "tokenizer_model_max_length": 2048,
  "tokenizer_padding_side": "right",
  "torch_dtype": "bfloat16",
  "transformers_version": "4.39.3",
  "tune_mm_mlp_adapter": true,
  "unfreeze_vision_tower": false,
  "use_cache": true,
  "use_mm_proj": true,
  "use_s2": true,
  "vocab_size": 128256
}
Isaachhh commented 4 months ago

There was a bug where HF Llama-3 wouldn't prepend the bos token as expected. commit here

Please check whether your model weights are up-to-date.
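
As a quick sanity check (a minimal sketch, assuming the Hub model id below), the fixed tokenizer should prepend the bos token:

# Verify that the (fixed) Llama-3 tokenizer prepends <|begin_of_text|> (id 128000).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
ids = tok("hello").input_ids
print(ids[0], tok.bos_token_id)  # with the fix, ids[0] == tok.bos_token_id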

swhoosh commented 4 months ago

Still didn't fix it. I have deleted the cached weights and checked that the tokenizer.json is the same as in the link you provided.

Isaachhh commented 4 months ago

We noticed that Llama-3 changed its eos_token from <|end_of_text|> to <|eot_id|> 2 days ago. We will fix it soon.
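
To see which eos your cached snapshot actually carries (a sketch; the values depend on which snapshot you downloaded):

# Inspect the eos token carried by the cached tokenizer and config.
from transformers import AutoConfig, AutoTokenizer

name = "meta-llama/Meta-Llama-3-8B-Instruct"
tok = AutoTokenizer.from_pretrained(name)
cfg = AutoConfig.from_pretrained(name)
print("tokenizer eos:", tok.eos_token, tok.eos_token_id)
print("config eos_token_id:", cfg.eos_token_id)
# Older snapshots report <|end_of_text|> (128001); newer ones <|eot_id|> (128009).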

Gary2018X commented 4 months ago

In theory, can I also use this model? https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat Training gives the same error message. I manually modified this entry in tokenizer.json:

{
  "id": 128001,
  "content": "<|eot_id|>",
  "single_word": false,
  "lstrip": false,
  "rstrip": false,
  "normalized": false,
  "special": true
}

I received an error when training again:

Traceback (most recent call last):
  File "/root/siton-glusterfs-eaxtsxdfs/xts/projects/Bunny/bunny/train/train.py", line 399, in <module>
    train()
  File "/root/siton-glusterfs-eaxtsxdfs/xts/projects/Bunny/bunny/train/train.py", line 375, in train
    trainer.train(resume_from_checkpoint=True)
  File "/opt/conda/lib/python3.9/site-packages/transformers/trainer.py", line 1780, in train
    return inner_training_loop(
  File "/opt/conda/lib/python3.9/site-packages/transformers/trainer.py", line 1954, in _inner_training_loop
    deepspeed_load_checkpoint(
  File "/opt/conda/lib/python3.9/site-packages/transformers/integrations/deepspeed.py", line 429, in deepspeed_load_checkpoint
    load_path, _ = deepspeed_engine.load_checkpoint(
  File "/opt/conda/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 2763, in load_checkpoint
    success = self._load_zero_checkpoint(load_dir, tag, load_optimizer_states=load_optimizer_states)
  File "/opt/conda/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 2954, in _load_zero_checkpoint
    self.optimizer.load_state_dict(state_dict_list=zero_sd_list,
  File "/opt/conda/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 2625, in load_state_dict
    self._rigid_load_state_dict(state_dict_list[dist.get_rank(group=self.dp_process_group)],
  File "/opt/conda/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 2573, in _rigid_load_state_dict
    curr_param.data.copy_(saved_param.data)
RuntimeError: The size of tensor a (449309440) must match the size of tensor b (21495808) at non-singleton dimension 0

MODEL_TYPE=llama3-8b

PRETRAIN_DIR=bunny-$MODEL_TYPE-pretrain
OUTPUT_DIR=bunny-lora-juzaol-$MODEL_TYPE

mkdir -p ./checkpoints-$MODEL_TYPE/$OUTPUT_DIR

deepspeed bunny/train/train.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --deepspeed ./script/deepspeed/zero3.json \
    --model_name_or_path /models/Llama3-8B-Chinese-Chat \
    --model_type $MODEL_TYPE \
    --version llama \
    --data_path Bunny.json \
    --image_folder /image \
    --vision_tower /models/siglip-so400m-patch14-384 \
    --pretrain_mm_mlp_adapter /models/bunny-pretrain-llama3-8b-siglip/mm_projector.bin \
    --mm_projector_type mlp2x_gelu \
    --image_aspect_ratio pad \
    --group_by_modality_length False \
    --bf16 True \
    --output_dir ./checkpoints-$MODEL_TYPE/$OUTPUT_DIR \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 2 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 500 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --unfreeze_vision_tower True \
    --report_to none | tee 2>&1 ./checkpoints-$MODEL_TYPE/$OUTPUT_DIR/log.txt


Meta-Llama-3-8B-Instruct gives the same error.
Is there anything else that needs to be modified?
Isaachhh commented 4 months ago

Try editing here rather than editing the configuration of the base model.

Gary2018X commented 4 months ago

I changed it to this:

conv_llama = Conversation(
    system="A chat between a curious user and an artificial intelligence assistant. "
           "The assistant gives helpful, detailed, and polite answers to the user's questions.",
    roles=("USER", "ASSISTANT"),
    version="llama",
    messages=(),
    offset=0,
    sep_style=SeparatorStyle.TWO,
    sep=" ",
    sep2="<|eot_id|>",
)
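
If I read SeparatorStyle.TWO correctly (assuming Bunny keeps the LLaVA-style prompt assembly, where sep follows user turns and sep2 closes assistant turns), the rendered training prompt would look roughly like this:

# Rough illustration of how the conv_llama template above renders a conversation
# (assumption: LLaVA-style SeparatorStyle.TWO assembly; the sample text is made up).
system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
sep, sep2 = " ", "<|eot_id|>"
prompt = (system + sep
          + "USER: " + "<image>\nDescribe the image." + sep
          + "ASSISTANT: " + "The image shows ..." + sep2)
print(prompt)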

Training is OK, but when I merge the models, there is an error here:

Traceback (most recent call last):
  File "/root/siton-glusterfs-eaxtsxdfs/xts/projects/Bunny/script/merge_lora_weights.py", line 26, in <module>
    merge_lora(args)
  File "/root/siton-glusterfs-eaxtsxdfs/xts/projects/Bunny/script/merge_lora_weights.py", line 13, in merge_lora
    model.save_pretrained(args.save_model_path)
  File "/opt/conda/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2468, in save_pretrained
    safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={"format": "pt"})
  File "/opt/conda/lib/python3.9/site-packages/safetensors/torch.py", line 281, in save_file
    serialize_file(_flatten(tensors), filename, metadata=metadata)
safetensors_rust.SafetensorError: Error while serializing: IoError(Os { code: 5, kind: Uncategorized, message: "Input/output error" })

Merge script:

python script/merge_lora_weights.py \
    --model-path ./checkpoints-llama3-8b/bunny-lora-juzaol-llama3-8b \
    --model-base ./Llama3-8B-Chinese-Chat \
    --model-type llama3-8b \
    --save-model-path ./models/juzao_modelllama38b
swhoosh commented 4 months ago

Try editing here rather than editing the configuration of the base model.

I edited the config as you said. The training was fine, but during inference I got

 "text": "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!",

for every sample.

The output tokens look like this

tensor([[128000,     32,   6369,   1990,    264,  22999,   1217,    323,    459,
          21075,  11478,  18328,     13,    578,  18328,   6835,  11190,     11,
          11944,     11,    323,  48887,  11503,    311,    279,   1217,    596,
           4860,     13,  14194,     25,    220,   -200,    198,  22818,    279,
          15489,    865,  30630,   2217,     11,   7664,  14955,     25,  36660,
           3931,   2891,     25,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0]], device='cuda:0')
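
The trailing zeros line up with the "!" output: in the Llama-3 vocabulary, token id 0 decodes to "!", so a generation stuck on id 0 prints a run of exclamation marks. A quick way to check (sketch):

# Token id 0 in the Llama-3 vocab decodes to "!".
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
print(repr(tok.decode([0])))      # '!'
print(repr(tok.decode([0] * 5)))  # '!!!!!'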
Isaachhh commented 4 months ago

@Gary2018X That seems unrelated to Bunny. Please try Googling it.

Isaachhh commented 4 months ago

@swhoosh What about the loss curve?

swhoosh commented 4 months ago

@swhoosh What about the loss curve?

I trained for only 20 steps just to test it out first. The loss seemed fine when I trained for full epochs yesterday, where I edited the model's config instead of Bunny's like you recommended. However, it still had the same problem. FYI, I was able to get the expected results from aaditya/Llama3-OpenBioLLM-8B, so I think it might be a problem with the config?

Isaachhh commented 4 months ago

Maybe there is a large gap between medical images/knowledge and regular images/knowledge.

swhoosh commented 4 months ago

Well, Phi-2 did actually work during our testing, and I was able to get Llama-3 to work before the recent config update. Can you try reproducing the finetuning result on your end to make sure the model is behaving correctly?

Isaachhh commented 4 months ago

https://github.com/BAAI-DCAI/Bunny/commit/5d9283b59a77972ecbf67fc2d1a814738b091548 works relatively well in our experiments. But we are still working on training and there may still be bugs.

BTW, the change in train.py is no longer needed because of this commit.

Gary2018X commented 4 months ago

@Gary2018X That seems unrelated to Bunny. Please try Googling it.

5d9283b works relatively well in our experiments. But we are still working on training and there may still be bugs.

BTW, the change in train.py is no longer needed because of this commit.

After making my setup consistent with this change, my problem was resolved and I was able to run normally. Thank you very much.

swhoosh commented 4 months ago

@Gary2018X are you able to run inference? My inference still produces the same result as before:

Try editing here rather than editing the configuration of the base model.

I edited the config as you said. The training was fine, but during inference I got

 "text": "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!",

for every sample.

The output tokens look like this

tensor([[128000,     32,   6369,   1990,    264,  22999,   1217,    323,    459,
          21075,  11478,  18328,     13,    578,  18328,   6835,  11190,     11,
          11944,     11,    323,  48887,  11503,    311,    279,   1217,    596,
           4860,     13,  14194,     25,    220,   -200,    198,  22818,    279,
          15489,    865,  30630,   2217,     11,   7664,  14955,     25,  36660,
           3931,   2891,     25,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0,      0,      0,      0,      0,
              0,      0,      0,      0,      0]], device='cuda:0')

I have checked my Llama-3 version and am using the latest dev branch.

Gary2018X commented 4 months ago

It may be related to your base model. I used Llama3-8B-Chinese-Chat instead of Meta-Llama-3-8B-Instruct; their tokenizers are different.
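
A quick way to compare the two tokenizers' special tokens (sketch, assuming both models are pulled from the Hub):

# Compare the special tokens of the two candidate base models.
from transformers import AutoTokenizer

for name in ["shenzhi-wang/Llama3-8B-Chinese-Chat", "meta-llama/Meta-Llama-3-8B-Instruct"]:
    tok = AutoTokenizer.from_pretrained(name)
    print(name, "| bos:", tok.bos_token, "| eos:", tok.eos_token, "| vocab:", tok.vocab_size)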

Gary2018X commented 4 months ago

Although I can run inference normally, the results are not as good as Qwen-1.8B yet.

Isaachhh commented 4 months ago

@swhoosh @Gary2018X We will keep using <|end_of_text|> as the eos_token for Bunny-Llama3.

Isaachhh commented 2 months ago

Closing the issue for now as there is no further discussion. Feel free to reopen it if there are any other questions.