mbzuai-oryx / Video-ChatGPT

[ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted for spatiotemporal video representation. We also introduce a rigorous 'Quantitative Evaluation Benchmarking' for video-based conversational models.
https://mbzuai-oryx.github.io/Video-ChatGPT
Creative Commons Attribution 4.0 International

Longer frames issue #14

Closed wang9danzuishuai closed 1 year ago

wang9danzuishuai commented 1 year ago

In "./video_chatgpt/eval/model_utils.py", line 12

```python
def load_video(vis_path, n_clips=1, num_frm=100):
    """
    Load video frames from a video file.

    Parameters:
    vis_path (str): Path to the video file.
    n_clips (int): Number of clips to extract from the video. Defaults to 1.
    num_frm (int): Number of frames to extract from each clip. Defaults to 100.
    """
```
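(For context, uniform sampling with `num_frm` could look like the sketch below, assuming the video is read with decord's `VideoReader`; this is an illustration, not the repo's exact implementation.)

```python
import numpy as np
from decord import VideoReader, cpu

def load_video_uniform(vis_path, num_frm=100):
    """Illustrative: sample num_frm evenly spaced frames from a video."""
    vr = VideoReader(vis_path, ctx=cpu(0))
    frame_idx = np.linspace(0, len(vr) - 1, num_frm, dtype=int)  # evenly spaced indices
    return vr.get_batch(frame_idx).asnumpy()  # (num_frm, H, W, 3)
```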

I modified num_frm from 100 to 200 in order to understand longer videos better, but the following errors occurred:

```
2023-06-21 16:40:25 | ERROR | stderr | /home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/transformers/generation/utils.py:1211: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
2023-06-21 16:40:25 | ERROR | stderr |   warnings.warn(
2023-06-21 16:40:26 | ERROR | stderr | Traceback (most recent call last):
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/gradio/routes.py", line 394, in run_predict
2023-06-21 16:40:26 | ERROR | stderr |     output = await app.get_blocks().process_api(
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/gradio/blocks.py", line 1075, in process_api
2023-06-21 16:40:26 | ERROR | stderr |     result = await self.call_function(
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/gradio/blocks.py", line 898, in call_function
2023-06-21 16:40:26 | ERROR | stderr |     prediction = await anyio.to_thread.run_sync(
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/anyio/to_thread.py", line 33, in run_sync
2023-06-21 16:40:26 | ERROR | stderr |     return await get_asynclib().run_sync_in_worker_thread(
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
2023-06-21 16:40:26 | ERROR | stderr |     return await future
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 807, in run
2023-06-21 16:40:26 | ERROR | stderr |     result = context.run(func, *args)
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/gradio/utils.py", line 549, in async_iteration
2023-06-21 16:40:26 | ERROR | stderr |     return next(iterator)
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/Video-ChatGPT/video_chatgpt/demo/chat.py", line 118, in answer
2023-06-21 16:40:26 | ERROR | stderr |     output_ids = self.model.generate(
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
2023-06-21 16:40:26 | ERROR | stderr |     return func(*args, **kwargs)
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/transformers/generation/utils.py", line 1462, in generate
2023-06-21 16:40:26 | ERROR | stderr |     return self.sample(
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/transformers/generation/utils.py", line 2478, in sample
2023-06-21 16:40:26 | ERROR | stderr |     outputs = self(
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
2023-06-21 16:40:26 | ERROR | stderr |     return forward_call(*args, **kwargs)
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/Video-ChatGPT/video_chatgpt/model/video_chatgpt.py", line 191, in forward
2023-06-21 16:40:26 | ERROR | stderr |     outputs = self.model(
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/anaconda3/envs/fantasy/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
2023-06-21 16:40:26 | ERROR | stderr |     return forward_call(*args, **kwargs)
2023-06-21 16:40:26 | ERROR | stderr |   File "/home/wangpj/Video-ChatGPT/video_chatgpt/model/video_chatgpt.py", line 105, in forward
2023-06-21 16:40:26 | ERROR | stderr |     if cur_input_ids[video_start_token_pos + num_patches + 1] != self.vision_config.vid_end_token:
2023-06-21 16:40:26 | ERROR | stderr | IndexError: index 523 is out of bounds for dimension 0 with size 429
```

After many attempts, we still couldn't figure out the cause. Could you help me look into this problem? Or is there a correct way to use 200 frames? Thank you!

mmaaz60 commented 1 year ago

Hi @wang9danzuishuai,

Thank you for your interest in our work. Currently, the codebase is configured to use only 100 frames from a video. Given 100 video frames, features are extracted using a pretrained CLIP model, taking the outputs of the second-to-last layer. This gives us a tensor of shape (t, s, c), where t=100 is the number of frames, s=256 is the number of spatial tokens, and c=1024 is the feature dimension.

We construct the spatiotemporal features by averaging across the t and s dimensions:

```
temporal features: average over s -> (t, c) = (100, 1024)
spatial features:  average over t -> (s, c) = (256, 1024)
all features:      concatenate    -> (100 + 256, 1024) = (356, 1024)
```
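For concreteness, here is a minimal PyTorch sketch of this pooling (tensor names are illustrative, not the repo's exact code):

```python
import torch

t, s, c = 100, 256, 1024              # frames, spatial tokens, feature dim
clip_features = torch.randn(t, s, c)  # stand-in for CLIP second-to-last layer outputs

temporal_features = clip_features.mean(dim=1)  # average over s -> (t, c) = (100, 1024)
spatial_features = clip_features.mean(dim=0)   # average over t -> (s, c) = (256, 1024)

video_features = torch.cat([temporal_features, spatial_features], dim=0)
print(video_features.shape)  # torch.Size([356, 1024]) = (t + s, c)
```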

As can be seen from the above, the total number of tokens for 200 frames would be 200 + 256 = 456, and so on.

In order to implement this change in the codebase, note that the total number of features (356) is hard-coded in a few places. For example, change 356 to 456 here and change the padding size accordingly here. Similarly, during training, the same change should be made here as well.
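In other words, every hard-coded token length must stay in sync with num_frm. A minimal illustration (the variable names here are hypothetical, standing in for the locations linked above):

```python
# Hypothetical names for illustration: keep every hard-coded token count
# equal to num_frm plus the 256 spatial tokens.
num_frm = 200                                    # frames sampled per video
num_spatial_tokens = 256                         # CLIP patch tokens per frame
video_token_len = num_frm + num_spatial_tokens   # 456 (was 356 for 100 frames)
```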

Please let me know if it solves the issue. Thanks

wang9danzuishuai commented 1 year ago

Thanks so much. It works right now. It's very kind of you to help us with this. Wish you happiness every day!

Kratos-Wen commented 1 year ago

> Thanks so much. It works right now. It's very kind of you to help us with this. Wish you happiness every day!

Hi @wang9danzuishuai, I'm interested in your attempt. How did the frame number adjustment work out in your tests? Did it improve understanding of longer videos?

wang9danzuishuai commented 1 year ago

@Kratos-Wen Hi, I just followed the instructions given by @mmaaz60 and changed num_frm from 100 to 200, but this change brought no apparent improvement in understanding longer videos. If a video is about 30 minutes long, 100 frames and 200 frames make little difference. I guess changing the frame extraction method might work better, such as extracting 1 frame per second from the video.

onlyonewater commented 2 months ago

@wang9danzuishuai, hi, I think so too. How would one change the code to extract 1 or 2 frames per second?
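One possible way to do this, assuming the video is read with decord's `VideoReader` as in `load_video` (a sketch under that assumption, not the repo's exact code):

```python
import numpy as np
from decord import VideoReader, cpu

def load_video_by_fps(vis_path, frames_per_second=1):
    """Illustrative: sample frames at a fixed rate instead of a fixed count."""
    vr = VideoReader(vis_path, ctx=cpu(0))
    native_fps = vr.get_avg_fps()             # source video frame rate
    step = max(int(round(native_fps / frames_per_second)), 1)
    frame_idx = np.arange(0, len(vr), step)   # one index every `step` frames
    return vr.get_batch(frame_idx).asnumpy()  # (num_frames, H, W, 3)
```

Note that the number of sampled frames then varies per video, so the hard-coded token counts discussed above (356/456) would have to be adjusted accordingly, e.g. by padding to a fixed maximum.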