LLaVA-VL / LLaVA-NeXT

Storage size calculation overflowed when running inference with the LLaVA_OneVision Tutorials notebook #173

Open FrankFcc opened 3 weeks ago

FrankFcc commented 3 weeks ago

I'm not sure what's going on here. After setting up the environment as described, I ran the first inference example from the tutorial (single-image input with LLaVA-OneVision) and got the error below.
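
For context, the failing cell is essentially the single-image example from the tutorial. A sketch of it follows; the checkpoint name, conversation template, prompt, and image path are assumptions taken from the tutorial rather than my exact values, and only the final block (tokenize + generate) matches the traceback verbatim.

# Sketch of the failing cell (reconstructed; see assumptions above).
import copy
import torch
from PIL import Image

from llava.model.builder import load_pretrained_model
from llava.mm_utils import process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates

pretrained = "lmms-lab/llava-onevision-qwen2-7b-ov"  # assumed checkpoint name
device = "cuda"
tokenizer, model, image_processor, max_length = load_pretrained_model(
    pretrained, None, "llava_qwen", device_map="auto"
)
model.eval()

image = Image.open("test_image.jpg")  # placeholder path for any single test image
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [t.to(dtype=torch.float16, device=device) for t in image_tensor]

conv = copy.deepcopy(conv_templates["qwen_1_5"])  # assumed template for the Qwen2-based model
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?")
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()

input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
image_sizes = [image.size]

cont = model.generate(
    input_ids,
    images=image_tensor,
    image_sizes=image_sizes,
    do_sample=False,
    temperature=0,
    max_new_tokens=4096,
)
text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)
print(text_outputs)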

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[2], line 39
     35 input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
     36 image_sizes = [image.size]
---> 39 cont = model.generate(
     40     input_ids,
     41     images=image_tensor,
     42     image_sizes=image_sizes,
     43     do_sample=False,
     44     temperature=0,
     45     max_new_tokens=4096,
     46 )
     47 text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)
     48 print(text_outputs)

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

File ~/LLaVA-NeXT/llava/model/language_model/llava_qwen.py:135, in LlavaQwenForCausalLM.generate(self, inputs, images, image_sizes, modalities, **kwargs)
    132 else:
    133     inputs_embeds = self.get_model().embed_tokens(inputs)
--> 135 return super().generate(position_ids=position_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, **kwargs)

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

File ~/.conda/envs/llava/lib/python3.10/site-packages/transformers/generation/utils.py:1528, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)
   1510     result = self.assisted_decoding(
   1511         input_ids,
   1512         candidate_generator=candidate_generator,
   (...)
   1524         **model_kwargs,
   1525     )
   1526 if generation_mode == GenerationMode.GREEDY_SEARCH:
   1527     # 11. run greedy search
-> 1528     result = self._greedy_search(
   1529         input_ids,
   1530         logits_processor=prepared_logits_processor,
   1531         stopping_criteria=prepared_stopping_criteria,
   1532         pad_token_id=generation_config.pad_token_id,
   1533         eos_token_id=generation_config.eos_token_id,
   1534         output_scores=generation_config.output_scores,
   1535         output_logits=generation_config.output_logits,
   1536         return_dict_in_generate=generation_config.return_dict_in_generate,
   1537         synced_gpus=synced_gpus,
   1538         streamer=streamer,
   1539         **model_kwargs,
   1540     )
   1542 elif generation_mode == GenerationMode.CONTRASTIVE_SEARCH:
   1543     if not model_kwargs["use_cache"]:

File ~/.conda/envs/llava/lib/python3.10/site-packages/transformers/generation/utils.py:2412, in GenerationMixin._greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, output_logits, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)
   2409 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
   2411 # forward pass to get next token
-> 2412 outputs = self(
   2413     **model_inputs,
   2414     return_dict=True,
   2415     output_attentions=output_attentions,
   2416     output_hidden_states=output_hidden_states,
   2417 )
   2419 if synced_gpus and this_peer_finished:
   2420     continue  # don't waste resources running the code we don't need

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File ~/.conda/envs/llava/lib/python3.10/site-packages/accelerate/hooks.py:169, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
    167         output = module._old_forward(*args, **kwargs)
    168 else:
--> 169     output = module._old_forward(*args, **kwargs)
    170 return module._hf_hook.post_forward(module, output)

File ~/LLaVA-NeXT/llava/model/language_model/llava_qwen.py:103, in LlavaQwenForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, images, image_sizes, return_dict, modalities, dpo_forward, cache_position)
    100     return logits, labels
    102 else:
--> 103     return super().forward(
    104         input_ids=input_ids,
    105         attention_mask=attention_mask,
    106         position_ids=position_ids,
    107         past_key_values=past_key_values,
    108         inputs_embeds=inputs_embeds,
    109         labels=labels,
    110         use_cache=use_cache,
    111         output_attentions=output_attentions,
    112         output_hidden_states=output_hidden_states,
    113         return_dict=return_dict,
    114     )

File ~/.conda/envs/llava/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:1168, in Qwen2ForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
   1165 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
   1167 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
-> 1168 outputs = self.model(
   1169     input_ids=input_ids,
   1170     attention_mask=attention_mask,
   1171     position_ids=position_ids,
   1172     past_key_values=past_key_values,
   1173     inputs_embeds=inputs_embeds,
   1174     use_cache=use_cache,
   1175     output_attentions=output_attentions,
   1176     output_hidden_states=output_hidden_states,
   1177     return_dict=return_dict,
   1178 )
   1180 hidden_states = outputs[0]
   1181 logits = self.lm_head(hidden_states)

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File ~/.conda/envs/llava/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:1053, in Qwen2Model.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
   1043     layer_outputs = self._gradient_checkpointing_func(
   1044         decoder_layer.__call__,
   1045         hidden_states,
   (...)
   1050         use_cache,
   1051     )
   1052 else:
-> 1053     layer_outputs = decoder_layer(
   1054         hidden_states,
   1055         attention_mask=attention_mask,
   1056         position_ids=position_ids,
   1057         past_key_value=past_key_values,
   1058         output_attentions=output_attentions,
   1059         use_cache=use_cache,
   1060     )
   1062 hidden_states = layer_outputs[0]
   1064 if use_cache:

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File ~/.conda/envs/llava/lib/python3.10/site-packages/accelerate/hooks.py:169, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
    167         output = module._old_forward(*args, **kwargs)
    168 else:
--> 169     output = module._old_forward(*args, **kwargs)
    170 return module._hf_hook.post_forward(module, output)

File ~/.conda/envs/llava/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:768, in Qwen2DecoderLayer.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache, **kwargs)
    765 hidden_states = self.input_layernorm(hidden_states)
    767 # Self Attention
--> 768 hidden_states, self_attn_weights, present_key_value = self.self_attn(
    769     hidden_states=hidden_states,
    770     attention_mask=attention_mask,
    771     position_ids=position_ids,
    772     past_key_value=past_key_value,
    773     output_attentions=output_attentions,
    774     use_cache=use_cache,
    775 )
    776 hidden_states = residual + hidden_states
    778 # Fully Connected

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File ~/.conda/envs/llava/lib/python3.10/site-packages/accelerate/hooks.py:169, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
    167         output = module._old_forward(*args, **kwargs)
    168 else:
--> 169     output = module._old_forward(*args, **kwargs)
    170 return module._hf_hook.post_forward(module, output)

File ~/.conda/envs/llava/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:378, in Qwen2FlashAttention2.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache, **kwargs)
    376 # Because the input can be padded, the absolute sequence length depends on the max position id.
    377 rotary_seq_len = max(kv_seq_len, position_ids[:, -1].max().item()) + 1
--> 378 cos, sin = self.rotary_emb(value_states, seq_len=rotary_seq_len)
    380 query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
    382 use_sliding_windows = (
    383     _flash_supports_window_size
    384     and getattr(self.config, "sliding_window", None) is not None
    385     and kv_seq_len > self.config.sliding_window
    386     and self.config.use_sliding_window
    387 )

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File ~/.conda/envs/llava/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File ~/.conda/envs/llava/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:122, in Qwen2RotaryEmbedding.forward(self, x, seq_len)
    119 def forward(self, x, seq_len=None):
    120     # x: [bs, num_attention_heads, seq_len, head_size]
    121     if seq_len > self.max_seq_len_cached:
--> 122         self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype)
    124     return (
    125         self.cos_cached[:seq_len].to(dtype=x.dtype),
    126         self.sin_cached[:seq_len].to(dtype=x.dtype),
    127     )

File ~/.conda/envs/llava/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:111, in Qwen2RotaryEmbedding._set_cos_sin_cache(self, seq_len, device, dtype)
    109 def _set_cos_sin_cache(self, seq_len, device, dtype):
    110     self.max_seq_len_cached = seq_len
--> 111     t = torch.arange(self.max_seq_len_cached, device=device, dtype=torch.int64).type_as(self.inv_freq)
    113     freqs = torch.outer(t, self.inv_freq)
    114     # Different from paper, but it uses a different permutation in order to obtain the same calculation

RuntimeError: Storage size calculation overflowed with sizes=[4644778781179219968]
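
From the traceback, the overflow is raised in Qwen2RotaryEmbedding._set_cos_sin_cache when torch.arange is asked for rotary_seq_len elements, and rotary_seq_len is computed on the flash-attention path from position_ids[:, -1].max() (modeling_qwen2.py line 377), so one of the position ids apparently holds a garbage value on the order of 4.6e18. I haven't found the root cause yet; as a first check (a sketch, not a confirmed fix) I'm rerunning the same cell with a non-flash attention backend, assuming load_pretrained_model forwards attn_implementation to from_pretrained as it does in recent copies of this repo:

# Diagnostic sketch only: swap the attention backend to see whether the overflow
# is specific to the Qwen2FlashAttention2 code path. The attn_implementation
# argument is an assumption about the builder's signature.
tokenizer, model, image_processor, max_length = load_pretrained_model(
    pretrained, None, "llava_qwen",
    device_map="auto",
    attn_implementation="sdpa",  # the tutorial default is "flash_attention_2"
)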
nikkiwoo-gh commented 2 weeks ago

Any solution so far?