First and foremost, thank you for developing such an amazing model. VideoLLaMA 2 has greatly advanced the field of video understanding, and your efforts are truly appreciated.
I have a question regarding the flexibility of frame processing in VideoLLaMA 2. Specifically, I would like to know whether there is a way to vary the number of frames the model processes, using the code provided in the Hugging Face repository.
Could you please provide guidance on how to achieve this? Is there an existing parameter or method within the Hugging Face codebase that allows for such customization?
Thank you again for your hard work and dedication.
I have found a way to specify the number of frames the model processes from a video: the `process_video` function accepts a `num_frames` argument that controls this.
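For anyone curious what `num_frames` typically does under the hood: frame-count arguments like this usually select uniformly spaced frame indices across the clip. Here is a minimal, self-contained sketch of that sampling strategy — note that `sample_frame_indices` is a hypothetical helper written for illustration, not the actual VideoLLaMA 2 code, whose internal logic may differ:

```python
import numpy as np

def sample_frame_indices(total_frames: int, num_frames: int) -> list[int]:
    """Uniformly sample `num_frames` indices from [0, total_frames).

    Mirrors the common strategy behind a `num_frames` argument; the
    real VideoLLaMA 2 implementation may sample differently.
    """
    if total_frames <= num_frames:
        # Fewer frames than requested: keep them all.
        return list(range(total_frames))
    # Evenly spaced positions from the first to the last frame.
    indices = np.linspace(0, total_frames - 1, num_frames)
    return [int(round(i)) for i in indices]

# e.g. pick 8 frames from a 100-frame clip
print(sample_frame_indices(100, 8))  # → [0, 14, 28, 42, 57, 71, 85, 99]
```

In practice you would then pass the chosen count straight to the repository's loader, along the lines of `process_video(video_path, processor, num_frames=8)` (the exact positional arguments are inferred from the answer above, so please check the function signature in the repo).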