cg1177 / VideoLLM

VideoLLM: Modeling Video Sequence with Large Language Models
Apache License 2.0

Preprint typo and the possibility of token-retrieval pretraining #1

Closed · EIFY closed 1 year ago

EIFY commented 1 year ago

Hi! I just went through your preprint, and here are my two quick reactions, if you don't mind:

Typo in the Figure 3 caption of the preprint

(b) “Unssen tokens” are data units that have not yet arrived, and predicting their attributes or when they appear in the future usually belongs to future prediction tasks.

It should be “Unseen tokens”.

Possibility of token-retrieval pretraining

VideoLLM, especially its use of a linear projector to map video tokens into the LLM's token space, reminds me of https://github.com/kohjingyu/fromage. However, there is no equivalent of FROMAGe's image-text retrieval pretraining task: i.e., given a description of the video, train the LLM to retrieve the correct video tokens, in the correct order, from among all the video tokens in the same batch. Could that be a useful pretraining task here?
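For concreteness, here is a minimal sketch of what I have in mind, assuming pooled features on both sides and an in-batch symmetric InfoNCE loss in the spirit of FROMAGe's image-text retrieval. Everything here (`TokenRetrievalHead`, `video_proj`, the pooling choice) is my own illustrative naming, not anything from the VideoLLM codebase:

```python
import torch
import torch.nn.functional as F

class TokenRetrievalHead(torch.nn.Module):
    """Hypothetical in-batch text-to-video retrieval objective (sketch only)."""

    def __init__(self, video_dim: int, llm_dim: int):
        super().__init__()
        # Linear projector mapping video encoder features into the LLM
        # embedding space, analogous to VideoLLM's linear projector.
        self.video_proj = torch.nn.Linear(video_dim, llm_dim)
        # Learnable temperature, as in CLIP-style contrastive losses.
        self.log_temperature = torch.nn.Parameter(torch.zeros(()))

    def forward(self, text_feats: torch.Tensor, video_feats: torch.Tensor) -> torch.Tensor:
        # text_feats:  (B, D_llm) pooled LLM representation of each description
        # video_feats: (B, D_vid) pooled features of the matching video clip
        v = F.normalize(self.video_proj(video_feats), dim=-1)
        t = F.normalize(text_feats, dim=-1)
        logits = t @ v.T * self.log_temperature.exp()  # (B, B) similarity matrix
        labels = torch.arange(logits.size(0), device=logits.device)
        # Symmetric InfoNCE: each description must retrieve its own clip
        # among the in-batch negatives, and vice versa.
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.T, labels))
```

Recovering the correct *order* of tokens would need something beyond this pooled version, e.g. scoring each video token position separately.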

cg1177 commented 1 year ago

Hello. Thanks for the reminder. I have had similar thoughts about the pre-training method you mention. My earlier idea was to learn autoregressive retrieval on a larger dataset in order to generate sequences, thereby expanding the existing downstream sequence samples. Whether this works as a pre-training method would still need to be verified experimentally; we may run related experiments in the future.
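Roughly, the autoregressive-retrieval idea could look something like the sketch below: at each step, score a pool of candidate video tokens against the LLM's current hidden state and supervise with the index of the true next token. This is a sketch only; all names are illustrative and nothing here has been verified experimentally.

```python
import torch
import torch.nn.functional as F

def autoregressive_retrieval_loss(
    hidden: torch.Tensor,          # (B, T, D) LLM hidden state at each step
    candidate_pool: torch.Tensor,  # (N, D) projected video tokens to retrieve from
    target_idx: torch.Tensor,      # (B, T) pool index of the true next token
    temperature: float = 0.07,
) -> torch.Tensor:
    """Hypothetical next-token-retrieval objective (sketch only)."""
    h = F.normalize(hidden, dim=-1)
    c = F.normalize(candidate_pool, dim=-1)
    logits = h @ c.T / temperature  # (B, T, N) step-wise retrieval scores
    # Cross-entropy over the candidate pool at every generation step.
    return F.cross_entropy(logits.flatten(0, 1), target_idx.flatten())
```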

EIFY commented 1 year ago

Hey @cg1177, just curious:

  1. What kind of retrieval method did you have in mind?
  2. I have been thinking about multimodal LLMs for a while. Is it possible for me to join the effort? Here are my thoughts on this topic.