LLaVA-VL / LLaVA-NeXT

Pretrain checkpoint inference #285

baochi0212 opened this issue 2 days ago

I pretrained with this script:

torchrun --nproc_per_node="${NUM_GPUS}" --nnodes="${NNODES}" \
    "./llava/train/train_mem.py" \
    --model_name_or_path ${LLM_VERSION} \
    --version ${PROMPT_VERSION} \
    --data_path ./pretrain_llava.yaml \
    --image_folder ./main/data \
    --vision_tower ${VISION_MODEL_VERSION} \
    --tune_mm_mlp_adapter \
    --mm_tunable_parts="mm_mlp_adapter" \
    --mm_vision_select_layer -2 \
    --mm_projector_type linear \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --fp16 True \
    --output_dir ./checkpoints/projectors/${BASE_RUN_NAME} \
    --num_train_epochs 1 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 500 \
    --learning_rate 2e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 False \
    --model_max_length 4096 \
    --gradient_checkpointing True \
    --dataloader_num_workers 16 \
    --lazy_preprocess True \
    --report_to "none" \
    --run_name $BASE_RUN_NAME \
    --attn_implementation sdpa
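
Since only the adapter is tuned, the run should only be saving the projector weights. A quick sanity check on what got saved (a sketch; the mm_projector.bin filename is my assumption, based on how the LLaVA trainer saves adapter-only checkpoints):

# Sketch: inspect the saved projector weights for NaN/inf values or odd dtypes.
# Assumes the checkpoint folder contains `mm_projector.bin`, which LLaVA's
# trainer writes when only the mm_mlp_adapter is tuned.
import torch

weights = torch.load(
    "./checkpoints/projectors/pretrain_blip/checkpoint-500/mm_projector.bin",
    map_location="cpu",
)
for name, t in weights.items():
    print(name, tuple(t.shape), t.dtype,
          "nan:", torch.isnan(t).any().item(),
          "inf:", torch.isinf(t).any().item())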

The loss converged pretty well, but when I run inference:

# pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN, IGNORE_INDEX
from llava.conversation import conv_templates, SeparatorStyle

from PIL import Image
import requests
import copy
import torch

import sys
import warnings

warnings.filterwarnings("ignore")
#pretrained = "/raid/phogpt_team/chitb/eval/MiniCPM-V/eval_mm/vlmevalkit/llava-onevision-qwen2-0.5b-finetune_multilingual_400K"
#pretrained = "/raid/phogpt_team/chitb/checkpoint_spp/llava-onevision-qwen2-0.5b-si"
pretrained = "./checkpoints/projectors/pretrain_blip/checkpoint-500"
model_name = "llava_qwen"
device = "cuda"
device_map = "auto"
model_base="./main/checkpoint_spp/Qwen2-0.5B-Instruct"
#model_base = None
tokenizer, model, image_processor, max_length = load_pretrained_model(model_path=pretrained, model_base=model_base, model_name=model_name, attn_implementation="sdpa")  # add any other llava_model_args you need here
#model = model.cuda()
model.eval()

image_path = "./main/test/test.png"
image = Image.open(image_path).convert("RGB")
print("image processor: ", image_processor)
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [_image.to(dtype=torch.float16, device=device) for _image in image_tensor]
print("Image tensor: ", image_tensor[0].shape)
conv_template = "qwen_1_5"  # Make sure you use correct chat template for different models
question = DEFAULT_IMAGE_TOKEN + "\nWho are you" 
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()
print("PROMPT: ", len(image_tensor), prompt_question)
input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
image_sizes = [image.size]

cont = model.generate(
    input_ids,
    images=image_tensor,
    image_sizes=image_sizes,
    do_sample=False, 
    max_new_tokens=256,
)
text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)
print(text_outputs)
Running this prints:

Model Class: LlavaQwenForCausalLM
image processor:  <llava.model.multimodal_encoder.siglip_encoder.SigLipImageProcessor object at 0x7ff4ea426a40>
Image tensor:  torch.Size([3, 384, 384])
PROMPT:  1 <|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
<image>
Who are you<|im_end|>
<|im_start|>assistant

Input embeds:  torch.Size([1, 752, 896])
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Starting from v4.46, the `logits` model output will have the same type as the model (except at train time, where it will always be FP32)
['!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!']

This is really weird, because the training/inference codebase and the training loss look just like the LLaVA-style codebase I used before, but I still can't figure out the bug!
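
A wall of "!" looks like greedy decoding picking token id 0 at every step, which can happen when the logits are all NaN (and id 0 decodes to "!" in this tokenizer). A minimal check, reusing the variables from the inference script above (assuming forward accepts the same images/image_sizes kwargs as generate):

# Sketch: single forward pass to see whether the logits are already NaN/inf.
with torch.inference_mode():
    out = model(input_ids, images=image_tensor, image_sizes=image_sizes)
print("any NaN logits:", torch.isnan(out.logits).any().item())
print("any inf logits:", torch.isinf(out.logits).any().item())
print("token id 0 decodes to:", repr(tokenizer.decode([0])))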

baochi0212 commented 2 days ago

I tried the torch and transformers versions pinned in requirements, as well as the latest versions; both fail.
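
For reference, a quick way to record the exact versions in play:

# Environment report for comparing the pinned vs. latest setups.
import torch
import transformers

print("torch:", torch.__version__, "| cuda:", torch.version.cuda,
      "| available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)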