I am trying Video-LLaVA on Ubuntu 22.04 under WSL2.
I have been able to run inference with the CLI, but not with the Gradio web server.
I started gradio_web_server with the following command.
It appears to start successfully, although a slightly worrisome message is displayed.
$ python -m videollava.serve.gradio_web_server
=== Details omitted ===
/home/nvidia/Video-LLaVA/videollava/serve/gradio_web_server.py:175: GradioUnusedKwargWarning: You have unused kwarg parameters in Chatbot, please remove them: {'bubble_full_width': True}
chatbot = gr.Chatbot(label="Video-LLaVA", bubble_full_width=True).style(height=750)
/home/nvidia/Video-LLaVA/videollava/serve/gradio_web_server.py:175: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
chatbot = gr.Chatbot(label="Video-LLaVA", bubble_full_width=True).style(height=750)
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 3.37.0, however version 4.44.1 is available, please upgrade.
--------
After starting gradio_web_server, I open a browser on the Windows host and go to http://127.0.0.1:7860, but the page does not load.
Is there a step I am missing or doing incorrectly?
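To narrow down whether the problem is the server itself or the WSL2-to-host network path, I am thinking of running a small reachability check from both inside WSL2 and on the Windows host (a stdlib-only sketch I wrote for this; `port_open` is my own helper, not part of Video-LLaVA or Gradio):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable hosts.
        return False

# Usage: run inside WSL2 and again on the Windows host, e.g.
#   port_open("127.0.0.1", 7860)
# If it is True inside WSL2 but False on the host, the issue is
# presumably WSL2 localhost forwarding rather than the server.
```

If the server turns out to be reachable only inside WSL2, I suspect the fix involves how Gradio binds its interface, but I have not confirmed that.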
The GPU is an RTX 4080 (16 GB), so I think I have enough memory.
Python: 3.10
PyTorch: 2.0.1
CUDA Version: 11.8