dvlab-research / LLaMA-VID

Official Implementation for LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models
Apache License 2.0

Custom long videos fail to run entirely #54

Closed TotoroDHL closed 4 months ago

TotoroDHL commented 5 months ago

2024-01-10 16:12:37 | INFO | gradio_web_server | ==== request ==== {'model': 'llama-vid-vicuna-7b-long', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. ", 'movie_part': 0, 'temperature': 0.5, 'top_p': 0.7, 'max_new_tokens': 512, 'stop': '', 'images': 'List of 0 images: []', 'videos': 'List of 0 videos: []'}
2024-01-10 16:12:37 | ERROR | stderr | Traceback (most recent call last):
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/blocks.py", line 1352, in process_api
    result = await self.call_function(
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/blocks.py", line 1093, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/utils.py", line 341, in async_iteration
    return await iterator.__anext__()
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/utils.py", line 334, in __anext__
    return await anyio.to_thread.run_sync(
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/utils.py", line 317, in run_sync_iterator_async
    return next(iterator)
  File "/home/HaolingDong/LLaMA-VID/llamavid/serve/gradio_web_server.py", line 228, in http_bot
    state.messages[-1][-1] = "▌"
IndexError: list index out of range
^C2024-01-10 16:13:19 | INFO | stdout | Keyboard interruption in main thread... closing server.
^C2024-01-10 16:13:20 | ERROR | stderr | Traceback (most recent call last):
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/blocks.py", line 2058, in block_thread
    time.sleep(0.1)
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/HaolingDong/LLaMA-VID/llamavid/serve/gradio_web_server.py", line 476, in <module>
    demo.queue(
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/blocks.py", line 1975, in launch
    self.block_thread()
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/blocks.py", line 2061, in block_thread
    self.server.close()
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/site-packages/gradio/networking.py", line 43, in close
    self.thread.join()
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/threading.py", line 1096, in join
    self._wait_for_tstate_lock()
  File "/home/HaolingDong/miniconda3/envs/llamavid/lib/python3.10/threading.py", line 1116, in _wait_for_tstate_lock
    if lock.acquire(block, timeout):
KeyboardInterrupt
^C
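The IndexError above fires at `state.messages[-1][-1] = "▌"` when the conversation history is empty, which matches the request log showing zero images and zero videos: nothing was uploaded, so no message pair was ever appended. A minimal sketch of the failure mode and a defensive guard (the list-of-`[role, text]`-pairs shape is an assumption for illustration, not the repo's actual `State` class):

```python
def set_streaming_marker(messages):
    """Place the streaming cursor on the last message, if one exists.

    `messages` is assumed to be a list of [role, text] pairs. Indexing
    messages[-1] on an empty list is exactly what raises the IndexError
    in the log, so we refuse instead of raising.
    """
    if not messages or not messages[-1]:
        return False  # nothing to update; upstream upload likely failed
    messages[-1][-1] = "\u258c"  # U+258C, the "▌" streaming cursor
    return True

# Empty history (the failing case in the log): the guard declines safely.
history = []
assert set_streaming_marker(history) is False

# Well-formed history: the cursor replaces the last message text in place.
history = [["user", "describe this video"]]
assert set_streaming_marker(history) is True
assert history[-1][-1] == "\u258c"
```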

First of all, the video often fails to upload at all; the video list stays at 0. Then, once an upload does succeed, it shows:

2024-01-10 15:48:13 | INFO | stdout | Caught Unknown Error
[15:48:13] /github/workspace/src/video/video_reader.cc:444: [/tmp/gradio_dhl/2a744ede2c3fc2c7dff925ee6479710723639a65/sdl.mov] Unable to handle EOF because it takes too long to retrieve last few frames and DECORD_EOF_RETRY_MAX=10240. You may override the limit by export DECORD_EOF_RETRY_MAX=20480 for example to allow more EOF retry attempts

After modifying the limit as suggested, the video could no longer be uploaded at all. I have also already changed frame_idx in model_worker_short, but it still doesn't work...
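For reference, the error message's suggested workaround is to raise decord's EOF retry cap through the `DECORD_EOF_RETRY_MAX` environment variable. A minimal sketch of doing this from Python; note the variable is read by decord's native reader, so it must be set before any `VideoReader` is created (the value 20480 is the one suggested by the error message itself):

```python
import os

# decord's C++ video reader reads this environment variable to cap EOF
# retry attempts; set it before constructing any decord.VideoReader.
os.environ["DECORD_EOF_RETRY_MAX"] = "20480"  # value suggested by the error
```

Raising the cap only papers over a file whose trailing frames are hard to retrieve; re-encoding the video (as the maintainer suggests below for .mov files) is usually the more reliable fix.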

Can this be deployed locally? Is there anything to watch out for?

yanwei-li commented 5 months ago

Hi, it seems you need to convert sdl.mov to sdl.mp4 for the demo. In the online demo, we set the time limit to 3 minutes because of computational constraints. For an offline demo, you can extend the limit according to your resources.
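A hedged sketch of the suggested .mov to .mp4 conversion, driving `ffmpeg` via `subprocess` (the file names come from the log above; `ffmpeg` must be on PATH, and the helper names are illustrative, not from the repo):

```python
import subprocess

def ffmpeg_cmd(src: str, dst: str) -> list[str]:
    # H.264 video + AAC audio is the most widely decodable MP4 combination;
    # -movflags +faststart moves the index to the front of the file, which
    # helps both browser uploads and streaming readers such as decord.
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-c:a", "aac",
            "-movflags", "+faststart", dst]

def remux_to_mp4(src: str, dst: str) -> None:
    """Re-encode a video to MP4; raises CalledProcessError on failure."""
    subprocess.run(ffmpeg_cmd(src, dst), check=True)

# Example (not run here): remux_to_mp4("sdl.mov", "sdl.mp4")
```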