LLaVA-VL / LLaVA-NeXT


What is the conv-mode for LLaVA-NeXT-Video-7B-32K #54

Closed rebuttalpapers closed 2 weeks ago

rebuttalpapers commented 3 weeks ago

I tried the following --conv-mode values: vicuna_v1, mistral_direct, Llava_llama_2, llama_2, mistral_instruct

and encountered the error below:

    AttributeError: 'LlavaMistralConfig' object has no attribute 'attention_bias'
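An error like this usually means the checkpoint's config.json predates an attribute that newer modeling code reads unconditionally. As a hedged illustration (not the repo's official fix), one defensive pattern is to backfill missing attributes with defaults before building the model; the helper and default names below are assumptions:

```python
# Hypothetical sketch: backfill config attributes that older checkpoints
# lack but newer modeling code expects. Names/defaults are assumptions.
def ensure_config_defaults(cfg, defaults):
    """Set each default on cfg only if the attribute is missing."""
    for name, value in defaults.items():
        if not hasattr(cfg, name):
            setattr(cfg, name, value)
    return cfg

class DummyConfig:
    """Stand-in for LlavaMistralConfig loaded from an older config.json."""
    attention_dropout = 0.0  # already present in the 32K checkpoint's config

cfg = ensure_config_defaults(DummyConfig(), {"attention_bias": False})
print(cfg.attention_bias)     # → False (backfilled)
print(cfg.attention_dropout)  # → 0.0 (left untouched)
```

This only patches the symptom; matching the transformers version the checkpoint was exported with is the safer route.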

ZhangYuanhan-AI commented 3 weeks ago

Hi, what is the version of your transformers?

rebuttalpapers commented 3 weeks ago

Thanks!

    import transformers
    print(transformers.__version__)

    4.40.0.dev0
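For reference, 4.40.0.dev0 is a pre-release that sits ahead of 4.39.0, so the two sides of this thread are on different versions. A quick stdlib-only sketch (my own helper, not part of transformers) that compares the leading numeric components makes the ordering explicit:

```python
def parse_version(v):
    """Keep leading numeric components: '4.40.0.dev0' -> (4, 40, 0)."""
    parts = []
    for piece in v.split("."):
        if not piece.isdigit():
            break  # stop at the first non-numeric segment, e.g. 'dev0'
        parts.append(int(piece))
    return tuple(parts)

# Tuple comparison orders the versions component by component.
print(parse_version("4.40.0.dev0") > parse_version("4.39.0"))  # → True
```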

ZhangYuanhan-AI commented 3 weeks ago

My transformers version is 4.39.0, and the conv-mode should be mistral_instruct for LLaVA-NeXT-Video-7B-32K.

BTW, there is an "attention_dropout" entry in https://huggingface.co/lmms-lab/LLaVA-NeXT-Video-7B-32K/blob/main/config.json

rebuttalpapers commented 3 weeks ago

Thanks @ZhangYuanhan-AI !

  1. What is "attention_dropout", and how does it solve the error above ("AttributeError: 'LlavaMistralConfig' object has no attribute 'attention_bias'")?

  2. Additionally, I downgraded transformers from 4.40.0.dev0 to 4.39.0, and the same problem is still there.

  3. What is the command you use to run the LLaVA-NeXT-Video-7B-32K model? (For other models it is: bash scripts/video/demo/video_demo.sh lmms-lab/LLaVA-NeXT-Video-7B-DPO vicuna_v1 32 2 True ./data/llava_video/video-chatgpt/evaluation/Test_Videos/v_Lf_7RurLgp0.mp4)

  4. I added the following 3 lines for the config, though I am not sure whether they are correct:

         setattr(cfg_pretrained, 'attention_bias', 0)
         setattr(cfg_pretrained, 'rope_scaling', {"factor": 8.0, "type": "linear"})
         setattr(cfg_pretrained, 'pretraining_tp', 1)

     However, it does not produce any response:

Time taken for inference: 2.013814687728882 seconds
Question: [INST] Please provide a detailed description of the video, focusing on the main subjects, their actions, and the background scenes [/INST]

Response:

ZhangYuanhan-AI commented 2 weeks ago

https://github.com/LLaVA-VL/LLaVA-NeXT/blob/6944062b9bb2e61c48436f1a65c3ea339095ec91/playground/demo/video_demo.py#L160

Try changing this line to:

    output_ids = model.generate(
        inputs=input_ids,
        images=video,
        attention_mask=attention_masks,
        modalities="video",
        do_sample=True,
        temperature=0.2,
        max_new_tokens=1024,
        use_cache=True,
    )
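The key change above is passing attention_mask explicitly. A minimal, framework-free sketch of how such a mask is typically derived from padded token ids (the pad id below is a placeholder; in practice you would use tokenizer.pad_token_id and build a tensor):

```python
# Hypothetical padded sequence; 0 stands in for the real pad token id.
pad_token_id = 0
input_ids = [1, 5, 9, 0, 0]

# 1 for real tokens, 0 for padding -- the shape generate() expects
# (as a tensor) for its attention_mask argument.
attention_mask = [int(tok != pad_token_id) for tok in input_ids]
print(attention_mask)  # → [1, 1, 1, 0, 0]
```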