dvlab-research / LLaMA-VID

LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024)
Apache License 2.0

Gradio error #24

Closed: QiSu77 closed this issue 9 months ago

QiSu77 commented 9 months ago

Great work! I ran into the following issue:

```
2023-12-22 10:43:54 | ERROR | stderr |     return await anyio.to_thread.run_sync(
2023-12-22 10:43:54 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
2023-12-22 10:43:54 | ERROR | stderr |     return await get_asynclib().run_sync_in_worker_thread(
2023-12-22 10:43:54 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
2023-12-22 10:43:54 | ERROR | stderr |     return await future
2023-12-22 10:43:54 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
2023-12-22 10:43:54 | ERROR | stderr |     result = context.run(func, *args)
2023-12-22 10:43:54 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/utils.py", line 317, in run_sync_iterator_async
2023-12-22 10:43:54 | ERROR | stderr |     return next(iterator)
2023-12-22 10:43:54 | ERROR | stderr |   File "/home/deepspeed/multimodal/LLaMA-VID/llamavid/serve/gradio_web_server.py", line 226, in http_bot
2023-12-22 10:43:54 | ERROR | stderr |     state.messages[-1][-1] = "▌"
2023-12-22 10:43:54 | ERROR | stderr | IndexError: list index out of range
```

I think there may be something wrong with the code. I hit this when running inference in the Gradio demo with a long video.
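
The failing statement updates the last conversation turn in place, so it raises whenever `state.messages` is empty. A minimal sketch of the pattern (the `State` class below is only a stand-in for the repo's conversation state, and the guard at the end is an illustrative workaround, not the project's fix):

```python
# Minimal reproduction of the failing pattern in gradio_web_server.py
# (line 226 per the traceback). State is a stand-in for the repo's
# conversation state, not the actual implementation.

class State:
    def __init__(self):
        # Each turn is a mutable [role, text] pair.
        self.messages: list[list[str]] = []


state = State()

# With no turns appended (e.g. the video upload never registered),
# this is exactly the statement that raises in the traceback:
try:
    state.messages[-1][-1] = "▌"  # streaming-cursor update
except IndexError as err:
    print(f"IndexError: {err}")  # -> list index out of range

# Illustrative guard (an assumption, not the repository's actual fix):
if state.messages:
    state.messages[-1][-1] = "▌"
```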

yanwei-li commented 9 months ago

Hi, we updated the instructions for Gradio in README -> Gradio Web UI. I tried the code and it works without this error. Please retry the demo following those instructions, and feel free to report back if the error persists.

QiSu77 commented 9 months ago

@yanwei-li Thanks for your kind reply! However, I still have the problem. I will describe it in more detail.

The problem happens with any video longer than one minute. I also see it on your demo page: whenever I use a video longer than one minute, the demo page crashes, which I believe is the same problem I hit locally. The full error is as follows:

```
2023-12-22 19:40:15 | ERROR | asyncio | Task exception was never retrieved
future: <Task finished name='7c3zqxjokk8_7' coro=<Queue.process_events() done, defined at /home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/queueing.py:342> exception=WebSocketDisconnect(<CloseCode.ABNORMAL_CLOSURE: 1006>)>
Traceback (most recent call last):
  File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/queueing.py", line 346, in process_events
    client_awake = await self.gather_event_data(event)
  File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/queueing.py", line 219, in gather_event_data
    data, client_awake = await self.get_message(event, timeout=receive_timeout)
  File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/queueing.py", line 445, in get_message
    data = await asyncio.wait_for(
  File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
    return fut.result()
  File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/starlette/websockets.py", line 133, in receive_json
    self._raise_on_disconnect(message)
  File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/starlette/websockets.py", line 105, in _raise_on_disconnect
    raise WebSocketDisconnect(message["code"])
starlette.websockets.WebSocketDisconnect: CloseCode.ABNORMAL_CLOSURE
2023-12-22 19:40:15 | INFO | gradio_web_server | http_bot. ip: 172.31.36.106
2023-12-22 19:40:15 | INFO | gradio_web_server | model_name: llama-vid-7b-full-224-video-fps-1, worker_addr: http://localhost:40001
2023-12-22 19:40:15 | INFO | gradio_web_server | ==== request ==== {'model': 'llama-vid-7b-full-224-video-fps-1', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. ", 'movie_part': 0, 'temperature': 0.5, 'top_p': 0.7, 'max_new_tokens': 512, 'stop': '', 'images': 'List of 0 images: []', 'videos': 'List of 0 videos: []'}
2023-12-22 19:40:15 | ERROR | stderr | Traceback (most recent call last):
2023-12-22 19:40:15 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/routes.py", line 437, in run_predict
2023-12-22 19:40:15 | ERROR | stderr |     output = await app.get_blocks().process_api(
2023-12-22 19:40:15 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/blocks.py", line 1352, in process_api
2023-12-22 19:40:15 | ERROR | stderr |     result = await self.call_function(
2023-12-22 19:40:15 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/blocks.py", line 1093, in call_function
2023-12-22 19:40:15 | ERROR | stderr |     prediction = await utils.async_iteration(iterator)
2023-12-22 19:40:15 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/utils.py", line 341, in async_iteration
2023-12-22 19:40:15 | ERROR | stderr |     return await iterator.__anext__()
2023-12-22 19:40:15 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/utils.py", line 334, in __anext__
2023-12-22 19:40:15 | ERROR | stderr |     return await anyio.to_thread.run_sync(
2023-12-22 19:40:15 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
2023-12-22 19:40:15 | ERROR | stderr |     return await get_asynclib().run_sync_in_worker_thread(
2023-12-22 19:40:15 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
2023-12-22 19:40:15 | ERROR | stderr |     return await future
2023-12-22 19:40:15 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
2023-12-22 19:40:15 | ERROR | stderr |     result = context.run(func, *args)
2023-12-22 19:40:15 | ERROR | stderr |   File "/home/deepspeed/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/utils.py", line 317, in run_sync_iterator_async
2023-12-22 19:40:15 | ERROR | stderr |     return next(iterator)
2023-12-22 19:40:15 | ERROR | stderr |   File "/home/deepspeed/multimodal/LLaMA-VID/llamavid/serve/gradio_web_server.py", line 226, in http_bot
2023-12-22 19:40:15 | ERROR | stderr |     state.messages[-1][-1] = "▌"
2023-12-22 19:40:15 | ERROR | stderr | IndexError: list index out of range
```

The command I used to launch the worker:

```
python -m llamavid.serve.model_worker_short --host 0.0.0.0 --controller http://localhost:10000 --port 40001 --worker http://localhost:40001 --model-path work_dirs/llama-vid/llama-vid-7b-full-224-video-fps-1
```

I hope this report helps make this fantastic project even better. Thanks a lot!
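
One more observation: the request log above shows `'videos': 'List of 0 videos: []'`, i.e. the long video apparently never reached the server before the WebSocket dropped, so no turn was appended and `state.messages` was empty when `http_bot` ran. A purely illustrative sketch of a defensive early return (the generator signature and yielded values are assumptions for illustration, not the actual LLaMA-VID serving code):

```python
# Hypothetical guard at the top of an http_bot-style Gradio generator.
# The signature and yields are simplified assumptions, not the repo's code.

def http_bot(state, temperature, top_p, max_new_tokens):
    # If the upload was dropped (e.g. the WebSocket closed while a long
    # video was still transferring), no turn was ever appended.
    if not state.messages:
        yield state, "No pending message; please re-upload the video and retry."
        return

    # ... build the worker request and stream its output ...
    state.messages[-1][-1] = "▌"  # safe: at least one turn exists
    yield state, state.messages[-1][-1]
```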

yanwei-li commented 9 months ago

Same issue as #25.