haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0

Fine-tuning LLaVA with CLIP Vision Encoder: Scaling Up from 336x336 to 500x500 Images #1090

Open Nomiluks opened 7 months ago

Nomiluks commented 7 months ago

Question

To scale up the input image size, I made the following adjustments in clip_encoder.py:

    def load_model(self, device_map=None):
        if self.is_loaded:
            print('{} is already loaded, `load_model` called again, skipping.'.format(self.vision_tower_name))
            return

        self.image_processor = CLIPImageProcessor.from_pretrained(self.vision_tower_name)
        # To avoid cropping the image
        self.image_processor.crop_size = {'height': 500, 'width': 500}
        self.image_processor.size = {'shortest_edge': 500}
        # self.image_processor.do_center_crop = False
        # self.image_processor.padding = True
        # self.image_processor.do_resize = False

        self.vision_tower = CLIPVisionModel.from_pretrained(self.vision_tower_name, device_map=device_map)
        self.vision_tower.requires_grad_(False)

        self.is_loaded = True

However, with these adjustments in place, training fails with an error raised inside modeling_clip.py, and I have not been able to resolve it so far.

Any insights or assistance on how to tackle this problem would be greatly appreciated.

Traceback (most recent call last):
  File "/noman-workspace/LLaVA/llava/train/train_mem.py", line 4, in <module>
    train(attn_implementation="flash_attention_2")
  File "/noman-workspace/LLaVA/llava/train/train.py", line 970, in train
    trainer.train()
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
    return inner_training_loop(
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/trainer.py", line 1869, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/trainer.py", line 2772, in training_step
    loss = self.compute_loss(model, inputs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/trainer.py", line 2795, in compute_loss
    outputs = model(**inputs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1833, in forward
    loss = self.module(*inputs, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/peft/peft_model.py", line 922, in forward
    return self.base_model(
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/noman-workspace/LLaVA/llava/model/language_model/llava_llama.py", line 81, in forward
    ) = self.prepare_inputs_labels_for_multimodal(
  File "/noman-workspace/LLaVA/llava/model/llava_arch.py", line 202, in prepare_inputs_labels_for_multimodal
    image_features = self.encode_images(images)
  File "/noman-workspace/LLaVA/llava/model/llava_arch.py", line 141, in encode_images
    image_features = self.get_model().get_vision_tower()(images)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/noman-workspace/LLaVA/llava/model/multimodal_encoder/clip_encoder.py", line 61, in forward
    image_forward_outs = self.vision_tower(images.to(device=self.device, dtype=self.dtype), output_hidden_states=True)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 917, in forward
    return self.vision_model(
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 841, in forward
    hidden_states = self.embeddings(pixel_values)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 187, in forward
    embeddings = embeddings + self.position_embedding(self.position_ids)
RuntimeError: The size of tensor a (1226) must match the size of tensor b (577) at non-singleton dimension 1

jimchenhub commented 5 months ago

It seems like you have to interpolate the position embedding. Have you solved this problem?

GewelsJI commented 3 months ago

> It seems like you have to interpolate the position embedding. Have you solved this problem?

Do you know how to interpolate the position embedding?
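One common approach (not part of this repo, so treat the helper below as a rough sketch) is to bicubically resize the pretrained patch-position grid to the new grid, keep the CLS position unchanged, and swap the resized table into `vision_model.embeddings`. The function name `interpolate_clip_pos_embed` is my own; it assumes the Hugging Face `CLIPVisionModel` layout that LLaVA loads in `clip_encoder.py`:

    import torch
    import torch.nn.functional as F
    from torch import nn

    def interpolate_clip_pos_embed(vision_tower, new_image_size=500, patch_size=14):
        # Hypothetical helper (not in LLaVA): resize CLIP's learned position
        # embeddings to the patch grid implied by `new_image_size`.
        embeddings = vision_tower.vision_model.embeddings
        old_pos = embeddings.position_embedding.weight.data          # (old_num_pos, dim)
        cls_pos, patch_pos = old_pos[:1], old_pos[1:]                # keep CLS row as-is
        dim = old_pos.shape[1]
        old_grid = int(patch_pos.shape[0] ** 0.5)                    # 24 for 336px
        new_grid = new_image_size // patch_size                      # 35 for 500px

        # (old_grid**2, dim) -> (1, dim, old_grid, old_grid) so F.interpolate can resize it
        patch_pos = patch_pos.reshape(old_grid, old_grid, dim).permute(2, 0, 1).unsqueeze(0)
        patch_pos = F.interpolate(patch_pos.float(), size=(new_grid, new_grid),
                                  mode='bicubic', align_corners=False)
        patch_pos = patch_pos.squeeze(0).permute(1, 2, 0).reshape(new_grid * new_grid, dim)

        new_pos = torch.cat([cls_pos, patch_pos.to(old_pos.dtype)], dim=0)
        num_positions = new_pos.shape[0]                             # 1226 for 500px

        # Swap in the resized table and rebuild the position_ids buffer to match.
        embeddings.position_embedding = nn.Embedding(num_positions, dim)
        embeddings.position_embedding.weight = nn.Parameter(new_pos)
        embeddings.register_buffer(
            'position_ids',
            torch.arange(num_positions, device=old_pos.device).expand((1, -1)),
            persistent=False)
        embeddings.num_positions = num_positions
        embeddings.num_patches = new_grid * new_grid
        embeddings.image_size = new_image_size
        vision_tower.config.image_size = new_image_size

If something like this is called in `load_model` right after `CLIPVisionModel.from_pretrained(...)` and before `requires_grad_(False)` (so the resized table stays frozen with the rest of the tower), the 577-vs-1226 mismatch above should go away. Also note that 500 is not a multiple of the 14-pixel patch size, so the conv patchifier silently drops the last 10 rows/columns; a size such as 490 or 504 would use the whole image.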