Maybe you should check your CUDA version. In requirements.txt there is `--extra-index-url https://download.pytorch.org/whl/cu118`, which means it installs torch built for CUDA 11.8. That leads to failures when your machine runs a different CUDA version, such as CUDA 12.2.
Segmentation fault (core dumped) is mainly caused by a mismatch between the CUDA version torch was built with and the CUDA version actually installed on the machine. Please check that the two CUDA versions are aligned.
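A quick sanity check (assuming torch is already installed in the environment) is to print the CUDA version the installed wheel was built against and compare it with what `nvidia-smi` / `nvcc --version` report on the host:

```python
# Print the CUDA version the installed torch wheel was compiled against.
# The cu118 wheel from the extra index should report "11.8" here.
import torch

print("torch version:        ", torch.__version__)
print("torch built with CUDA:", torch.version.cuda)
print("CUDA available:       ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```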
The versions already match and the problem is still there. torch is built for CUDA 11.8 and the host is also on 11.8. Is there a known fix?
Hello, I'm a PhD student from ZJU. I also use VideoLLaMA2 in my own research. We created a WeChat group to discuss VideoLLaMA2 issues and help each other; could you join us? Please contact me: WeChat number == LiangMeng19357260600, phone number == +86 19357260600, e-mail == liangmeng89@zju.edu.cn.
When I run gradio_web_server_adhoc.py, I hit the error: Segmentation fault (core dumped).
I can open the Gradio web UI in my browser, but as soon as I upload a video and start chatting, the terminal prints Segmentation fault (core dumped). The same error occurs when I run the inference script below:
```python
import sys
sys.path.append('./')

from videollama2 import model_init, mm_infer
from videollama2.utils import disable_torch_init


def inference():
    disable_torch_init()


if __name__ == "__main__":
    inference()
```
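A generic way to narrow down where the crash happens (the standard-library faulthandler, not something specific to VideoLLaMA2) is to enable it at the very top of the script, before any torch or videollama2 imports:

```python
# Debugging aid: dump a Python-level traceback when the process receives a fatal
# signal such as SIGSEGV, instead of only printing "Segmentation fault (core dumped)".
import faulthandler
faulthandler.enable()
```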