Closed: yliu-cs closed this issue 1 year ago.
Hi, thank you for your interest in our work.

(1) Our videos are a subset of HD-VILA-100M. Please follow the HD-VILA-100M instructions to download the original videos and cut them into clips. To reduce the data-reading overhead, we compress the raw video clips to 3 fps and 240p (see the ffmpeg sketch after the example below).

(2) datasets/lfvila_data/pretrain/train_db and datasets/lfvila_data/pretrain/val.jsonl consist of lists of [clip_id, text] pairs built from the HD-VILA-100M metadata. We combine pairs that are contiguous in time to form long videos. We provide the clip ids of LF-VILA-8M; the corresponding text can be found in the HD-VILA-100M metadata. Here is an example:
{
  'tS3XvuOhbNo.14.mp4': text1,
  'tS3XvuOhbNo.15.mp4': text2,
  ...,
  'tS3XvuOhbNo.21.mp4': text3,
}
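For the preprocessing in (1), here is a minimal sketch of compressing raw clips to 3 fps / 240p with ffmpeg. The exact flags the authors used are not stated anywhere in the repo, so these settings are only a guess at an equivalent result, and the source directory layout is hypothetical:

```python
# Sketch: compress raw HD-VILA-100M clips to 3 fps / 240p.
# Assumes ffmpeg is on PATH. SRC is a hypothetical directory of raw
# clips; DST matches the path referenced in pretrain_stage1.yaml.
import subprocess
from pathlib import Path

SRC = Path("datasets/hdvila100m/video_clip")       # raw clips (hypothetical layout)
DST = Path("datasets/hdvila100m/video_clip_3fps")  # output dir from the config

DST.mkdir(parents=True, exist_ok=True)
for clip in SRC.glob("*.mp4"):
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", str(clip),
            "-r", "3",              # resample to 3 fps
            "-vf", "scale=-2:240",  # 240p, width rounded to an even value
            "-an",                  # drop audio; the text comes from transcripts
            str(DST / clip.name),
        ],
        check=True,
    )
```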
Also, please refer to #7 for transcripts of HD-VILA-100M.
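Once you have the metadata and transcripts, a rough sketch of how the temporally contiguous clips described in (2) could be grouped into long-video samples is below. The `clip_text` mapping stands in for whatever you build from the HD-VILA-100M metadata, and the helper names are hypothetical, not part of the released code:

```python
# Sketch: group temporally contiguous [clip_id, text] pairs into one
# long-video sample. Clip ids follow the '<video_id>.<index>.mp4'
# pattern shown in the example above.
def parse(clip_id: str) -> tuple[str, int]:
    # 'tS3XvuOhbNo.14.mp4' -> ('tS3XvuOhbNo', 14)
    video_id, idx, _ext = clip_id.rsplit(".", 2)
    return video_id, int(idx)

def group_contiguous(clip_text: dict[str, str]) -> list[list[tuple[str, str]]]:
    items = sorted(clip_text.items(), key=lambda kv: parse(kv[0]))
    samples: list[list[tuple[str, str]]] = []
    current: list[tuple[str, str]] = []
    prev = None  # (video_id, index) of the previous clip
    for clip_id, text in items:
        vid, idx = parse(clip_id)
        if prev is not None and (vid != prev[0] or idx != prev[1] + 1):
            samples.append(current)  # gap or new video: start a new sample
            current = []
        current.append((clip_id, text))
        prev = (vid, idx)
    if current:
        samples.append(current)
    return samples
```

Under this sketch, the run 'tS3XvuOhbNo.14.mp4' through 'tS3XvuOhbNo.21.mp4' from the example would come out as a single long-video sample.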
What is the data format of datasets/hdvila100m/video_clip_3fps, datasets/lfvila_data/pretrain/train_db, and datasets/lfvila_data/pretrain/val.jsonl mentioned in src/configs/pretrain_stage1.yaml? Can you provide concrete reference examples or the preparation process (for the long-form videos and the annotations, respectively)?