DAMO-NLP-SG / VideoLLaMA2

VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs
Apache License 2.0

Problem: Segmentation fault (core dumped) #95

Closed CamellIyquitous closed 3 weeks ago

CamellIyquitous commented 1 month ago

When I run gradio_web_server_adhoc.py, I hit this error: Segmentation fault (core dumped).

The gradio web UI loads fine in my browser, but as soon as I upload a video and start chatting, the terminal prints Segmentation fault (core dumped). The same error occurs when I run inference like this:

```python
import sys
sys.path.append('./')

from videollama2 import model_init, mm_infer
from videollama2.utils import disable_torch_init


def inference():
    disable_torch_init()

    # Video Inference
    modal = 'video'
    modal_path = 'assets/cat_and_chicken.mp4'
    instruct = 'What animals are in the video, what are they doing, and how does the video feel?'
    # Reply:
    # The video features a kitten and a baby chick playing together. The kitten is seen laying on the floor while the baby chick hops around. The two animals interact playfully with each other, and the video has a cute and heartwarming feel to it.

    # Image Inference
    modal = 'image'
    modal_path = 'assets/sora.png'
    instruct = 'What is the woman wearing, what is she doing, and how does the image feel?'
    # Reply:
    # The woman in the image is wearing a black coat and sunglasses, and she is walking down a rain-soaked city street. The image feels vibrant and lively, with the bright city lights reflecting off the wet pavement, creating a visually appealing atmosphere. The woman's presence adds a sense of style and confidence to the scene, as she navigates the bustling urban environment.

    model_path = 'DAMO-NLP-SG/VideoLLaMA2-7B'
    # Base model inference (only need to replace model_path)
    # model_path = 'DAMO-NLP-SG/VideoLLaMA2-7B-Base'
    model, processor, tokenizer = model_init(model_path)
    output = mm_infer(processor[modal](modal_path), instruct, model=model, tokenizer=tokenizer, do_sample=False, modal=modal)

    print(output)


if __name__ == "__main__":
    inference()
```

OctaAcid commented 1 month ago

Maybe you should check your CUDA version. requirements.txt contains --extra-index-url https://download.pytorch.org/whl/cu118, which means it installs torch built for CUDA 11.8. That can lead to failures if your machine has a different CUDA version, such as 12.2.

clownrat6 commented 1 month ago

Segmentation fault (core dumped) is usually caused by a mismatch between the CUDA version torch was built against and the CUDA version actually installed on the machine. Please check that the two versions are aligned.
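As a quick way to compare the two versions, a minimal sketch like the one below may help. It assumes the usual `nvcc --version` output format (the regex and the `system_cuda_version` helper are illustrative, not part of VideoLLaMA2); `torch.version.cuda` reports the CUDA version the installed torch wheel was built against.

```python
import re
import subprocess


def parse_cuda_release(nvcc_output: str) -> str:
    """Pull the 'release X.Y' toolkit version out of `nvcc --version` output."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    return match.group(1) if match else "unknown"


def system_cuda_version() -> str:
    """Query the local CUDA toolkit via `nvcc --version` (requires nvcc on PATH)."""
    out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
    return parse_cuda_release(out)


# Parsing a typical `nvcc --version` line:
sample = "Cuda compilation tools, release 11.8, V11.8.89"
print(parse_cuda_release(sample))  # → 11.8
```

Compare the result of `system_cuda_version()` against `python -c "import torch; print(torch.version.cuda)"`; if the major versions differ, reinstalling torch from the matching wheel index (e.g. cu118 vs cu121) is the usual fix.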

babyta commented 1 week ago

The versions match and I still have this problem. torch is built for CUDA 11.8 and the host is also on 11.8. Is there a known fix?

LiangMeng89 commented 2 days ago

> The versions match and I still have this problem. torch is built for CUDA 11.8 and the host is also on 11.8. Is there a known fix?

Hello, I'm a PhD student from ZJU. I also use VideoLLaMA2 in my own research. We have created a WeChat group to discuss VideoLLaMA2 issues and help each other; would you like to join us? Please contact me: WeChat number == LiangMeng19357260600, phone number == +86 19357260600, e-mail == liangmeng89@zju.edu.cn.

babyta commented 2 days ago

This is an automatic vacation reply from QQ Mail. Hello, I am currently on vacation and cannot reply to your email in person. I will respond as soon as possible after the holiday ends.