Closed: rebuttalpapers closed this issue 2 weeks ago
Hi, which version of transformers are you using?
Thanks!
import transformers
print(transformers.__version__)
4.40.0.dev0
My transformers version is 4.39.0, and the conv mode should be mistral_instruct for LLaVA-NeXT-Video-7B-32K.
BTW, there is an "attention_dropout" field in "https://huggingface.co/lmms-lab/LLaVA-NeXT-Video-7B-32K/blob/main/config.json"
Thanks @ZhangYuanhan-AI !
What is "attention_dropout", and how does it solve the problem above: AttributeError: 'LlavaMistralConfig' object has no attribute 'attention_bias'?
Additionally, I downgraded transformers from 4.40.0.dev0 to 4.39.0, and the same problem is still there.
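As a sanity check that 4.40.0.dev0 really is newer than 4.39.0 (a rough comparison sketch; for production code, packaging.version is the robust choice):

```python
def parse_version(v):
    """Turn a version string like '4.40.0.dev0' into a comparable tuple,
    keeping only the purely numeric parts (so '.dev0' is ignored)."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())

# 4.40.0.dev0 is newer than 4.39.0, so the downgrade is a real change.
print(parse_version("4.40.0.dev0") > parse_version("4.39.0"))  # True
```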
What is the command you use to call the LLaVA-NeXT-Video-7B-32K model? (For the other models it is: bash scripts/video/demo/video_demo.sh lmms-lab/LLaVA-NeXT-Video-7B-DPO vicuna_v1 32 2 True ./data/llava_video/video-chatgpt/evaluation/Test_Videos/v_Lf_7RurLgp0.mp4)
I added the following 3 lines for the config, not sure whether they are correct:

setattr(cfg_pretrained, 'attention_bias', 0)
setattr(cfg_pretrained, 'rope_scaling', {"factor": 8.0, "type": "linear"})
setattr(cfg_pretrained, 'pretraining_tp', 1)

However, it is not giving any response:
Time taken for inference: 2.013814687728882 seconds
Question: [INST]
Response:
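For reference, the three patches above can be reproduced on a stand-in object (FakeConfig below is a placeholder for illustration, not the real LlavaMistralConfig):

```python
class FakeConfig:
    """Placeholder for cfg_pretrained; the real object is a LlavaMistralConfig."""
    pass

cfg_pretrained = FakeConfig()
# The three attributes the model code expects but the checkpoint's config lacks.
setattr(cfg_pretrained, 'attention_bias', 0)
setattr(cfg_pretrained, 'rope_scaling', {"factor": 8.0, "type": "linear"})
setattr(cfg_pretrained, 'pretraining_tp', 1)

print(cfg_pretrained.rope_scaling["type"])  # linear
```

Whether 0, {"factor": 8.0, "type": "linear"}, and 1 are the right values for this checkpoint is exactly the open question in this thread.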
Try changing this line to:
output_ids = model.generate(inputs=input_ids, images=video, attention_mask=attention_masks, modalities="video", do_sample=True, temperature=0.2, max_new_tokens=1024, use_cache=True)
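If attention_masks is not already defined at that point, a common pattern (an assumption here, not verified against the demo script) is an all-ones mask shaped like input_ids, meaning every prompt token is attended to:

```python
import torch

# Placeholder prompt ids; in the demo these come from the tokenizer.
input_ids = torch.tensor([[1, 2, 3, 4]])
# All-ones mask: attend to every position (assumes no padding in the prompt).
attention_masks = torch.ones_like(input_ids)
print(attention_masks.tolist())  # [[1, 1, 1, 1]]
```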
I tried the following --conv-mode values: vicuna_v1, mistral_direct, Llava_llama_2, llama_2, mistral_instruct,
and encountered the error below: AttributeError: 'LlavaMistralConfig' object has no attribute 'attention_bias'
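One defensive workaround for this kind of error (a sketch of the general pattern, not the official fix; the stub class below stands in for the real config) is to read such attributes with getattr and a default instead of plain attribute access:

```python
class LlavaMistralConfigStub:
    """Stand-in for the real config class, which lacks attention_bias here."""
    pass

cfg = LlavaMistralConfigStub()
# getattr with a default avoids the AttributeError when the field is missing.
attention_bias = getattr(cfg, "attention_bias", False)
print(attention_bias)  # False
```

The downside is that the silently chosen default must actually match what the checkpoint was trained with, which is why pinning a compatible transformers version is usually the cleaner fix.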