lyuchenyang / Macaw-LLM

Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration

Questions about the files - which files to download #13

Closed chatsci closed 1 year ago

chatsci commented 1 year ago

Thanks for the cool project! I have two questions:

  1. Which files exactly should we download? In the COCO, VQA, etc. datasets there are many files; however, I believe only some of them are needed. For example, I downloaded the following:

Stage 1:

1. Download the COCO image dataset (2014 Train images [83K/13GB]) from: https://cocodataset.org/#download and unzip it into the current folder (train2014/).

2. Download the Macaw dataset: https://github.com/lyuchenyang/Macaw-LLM/blob/main/data/generated_examples_coco.json

3. Download the Macaw dataset: https://github.com/lyuchenyang/Macaw-LLM/blob/main/data/generated_examples_avsd.json

4. Download the Charades video dataset (Data (scaled to 480p, 13 GB)) from: https://prior.allenai.org/projects/charades and unzip it into the current folder (Charades_v1_480/).

5. In the current folder, create a folder named "avsd/". In "./avsd/", create "./avsd/videos/", "./avsd/audios/", and "./avsd/images/". Move all the videos from "Charades_v1_480/" to "./avsd/videos/".

6. In the current folder, create a folder named "coco/". In "./coco/", create "./coco/images/". Move all the images from "train2014/" to "./coco/images/".

Stage 2:

1. From https://visualqa.org/download.html download "Training annotations 2017 v2.0", "Validation annotations 2017 v2.0", "Training questions 2017 v2.0", "Validation questions 2017 v2.0". Put them in "./vqa/" and unzip.

2. From https://video-dialog.com/ download AVSD Dataset (4 files), put them into "./avsd/".

But I'm not sure whether this is everything we need.
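For reference, this is roughly how I set up the folders from the steps above (my own quick script, not from the repo), assuming the unzipped Charades_v1_480/ and train2014/ folders sit in the current directory:

    import shutil
    from pathlib import Path

    root = Path('.')

    # Create the folder layout described in the steps above.
    for d in ['avsd/videos', 'avsd/audios', 'avsd/images', 'coco/images', 'vqa']:
        (root / d).mkdir(parents=True, exist_ok=True)

    # Move the Charades videos into ./avsd/videos/.
    for video in (root / 'Charades_v1_480').glob('*.mp4'):
        shutil.move(str(video), str(root / 'avsd' / 'videos' / video.name))

    # Move the COCO train2014 images into ./coco/images/.
    for image in (root / 'train2014').glob('*.jpg'):
        shutil.move(str(image), str(root / 'coco' / 'images' / image.name))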

  2. In combine_visual_and_audio_names() in the supervised preprocessing Python script, there is:

    def add_image_names(dir=None):
        all_examples = json_load(dir)['annotations']

        for ind, e in enumerate(tqdm(all_examples)):
            _image_dir = e['image_path']
            if len(_image_dir.split('_')[-1].split('.')[0]) < 12:
                i_str = _image_dir.split('_')[-1].split('.')[0]
                n_str = '0' * (12 - len(i_str)) + i_str
                _image_dir = _image_dir.replace(i_str, n_str)

However, I can't find any "image_path" field in any of the above json files.

Looking forward to your answer. Thank you.

chatsci commented 1 year ago

Also, several functions are never called in the script, e.g. preprocess_vqa2_to_val_dataset(), preprocess_avsd_to_val_dataset(), and resize_image().

Are these functions abandoned, or are they called somewhere that is not shown in the Python script?

chatsci commented 1 year ago

Also, in preprocess_avsd_to_tensor_dataset(), the audio and the frames extracted from the videos are not actually used. Is this correct?

chatsci commented 1 year ago

My last question is about the filtering: in preprocess_avsd_to_tensor_dataset() there is no filtering step based on the input text length. In the other preprocessing functions (for VQA and Alpaca) there are filtering steps based on the length of the input text without the answer. Why do we filter based on the input text rather than the full_text? And why doesn't the AVSD preprocessing have a filtering step?

lyuchenyang commented 1 year ago

Hi, sorry for the late reply. Regarding which files to download: the ones you need are those used in the preprocessing steps (the VQA and AVSD datasets for the supervised data, and our instruction dataset for the unsupervised data). The metadata and the raw images/videos of VQA and AVSD are used for the supervised data, whereas only the raw images/videos of VQA and AVSD are used for the unsupervised data, together with our instruction data. The image_path is the path to the COCO images.
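As an illustration of why the snippet above pads the id to 12 digits: COCO train2014 files embed the image id as a zero-padded 12-digit number, so a shorter id read from the annotations has to be padded before it can be matched to a file (illustrative sketch only, not code from the repo):

    # Illustrative sketch: COCO train2014 images are named
    # COCO_train2014_<12-digit zero-padded image id>.jpg
    image_id = 9  # hypothetical id taken from the annotations
    file_name = 'COCO_train2014_' + str(image_id).zfill(12) + '.jpg'
    # -> 'COCO_train2014_000000000009.jpg', located under ./coco/images/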

lyuchenyang commented 1 year ago

Also, several functions are never called in the script, e.g. preprocess_vqa2_to_val_dataset(), preprocess_avsd_to_val_dataset(), and resize_image().

Are these functions abandoned, or are they called somewhere that is not shown in the Python script?

The first two functions are used to process the validation dataset for evaluation and inference. The last one is used to resize images as well as video frames.
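For illustration only (the repo's actual resize_image() may differ in target size and details), a resize step for an image or a video frame could look roughly like this:

    # Rough sketch of resizing an image or extracted video frame with PIL;
    # this is a generic example, not the repo's implementation.
    from PIL import Image

    def resize_image_sketch(path, size=(224, 224)):
        image = Image.open(path).convert('RGB')
        return image.resize(size)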

lyuchenyang commented 1 year ago

Also, in preprocess_avsd_to_tensor_dataset(), the audio and the frames extracted from the videos are not actually used. Is this correct?

Usually we do not store image pixels directly in the tensor dataset, as that would incur significant memory use; instead we keep the index of the image and load it during training.
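The general pattern looks roughly like this (a generic sketch, not the repo's actual dataset class):

    # Generic sketch of lazy image loading: the dataset stores only the
    # image path/index plus tokenized text, and pixels are read on demand.
    from PIL import Image
    from torch.utils.data import Dataset

    class LazyImageDataset(Dataset):
        def __init__(self, examples, transform=None):
            # examples: list of dicts with 'image_path' and 'input_ids'
            self.examples = examples
            self.transform = transform

        def __len__(self):
            return len(self.examples)

        def __getitem__(self, idx):
            e = self.examples[idx]
            image = Image.open(e['image_path']).convert('RGB')
            if self.transform is not None:
                image = self.transform(image)
            return image, e['input_ids']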

lyuchenyang commented 1 year ago

My last question is about the filtering: in preprocess_avsd_to_tensor_dataset() there is no filtering step based on the input text length. In the other preprocessing functions (for VQA and Alpaca) there are filtering steps based on the length of the input text without the answer. Why do we filter based on the input text rather than the full_text? And why doesn't the AVSD preprocessing have a filtering step?

This is because when the instruction is longer than the maximum length, no response will be included in the sequence, so the model cannot learn how to generate a response.
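Schematically, the kind of filtering used for VQA/Alpaca looks like this (hypothetical names, not the exact repo code):

    # Drop examples whose instruction alone already fills the maximum
    # sequence length: the answer would be truncated away entirely, so
    # the model would have nothing to learn from.
    def keep_example(tokenizer, instruction, max_length):
        instruction_ids = tokenizer.encode(instruction)
        return len(instruction_ids) < max_length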