Also, several functions are never called in the script, e.g., preprocess_vqa2_to_val_dataset(), preprocess_avsd_to_val_dataset(), and resize_image().
Are these functions abandoned, or are they called somewhere but just not shown in the Python script?
Also, in preprocess_avsd_to_tensor_dataset(), the audio and the frames extracted from the videos are not actually used. Is this correct?
My last question is about the filtering: preprocess_avsd_to_tensor_dataset() has no filtering step based on the input text length, while the other preprocessing functions (for VQA and Alpaca) filter on the length of the input text without the answer. Why do we filter on the input text rather than on the full_text? And why doesn't the AVSD preprocessing have a filtering step?
Hi, sorry for the late reply. Regarding the downloaded files: they are all used in the preprocessing steps (the VQA and AVSD datasets for the supervised data, and our instruction dataset for the unsupervised data). The metadata and the raw images/videos of VQA and AVSD are used for the supervised data, whereas only the raw images/videos of VQA and AVSD are used for the unsupervised data, together with our instruction data. The image_path is the directory containing the COCO images.
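For example, if image_path points at the folder holding the COCO train2014 images, the file name can be derived from the numeric image_id in the VQA annotations (a small illustrative helper, not code from the repo; COCO 2014 files embed the id zero-padded to 12 digits):

```python
import os

def coco_image_file(image_path, image_id, split="train2014"):
    # COCO 2014 images are named like COCO_train2014_000000000009.jpg,
    # i.e. the numeric image_id zero-padded to 12 digits
    return os.path.join(image_path, f"COCO_{split}_{image_id:012d}.jpg")

# e.g. coco_image_file("coco/images", 9) -> "coco/images/COCO_train2014_000000000009.jpg"
```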
> Also, several functions are never called in the script, e.g., preprocess_vqa2_to_val_dataset(), preprocess_avsd_to_val_dataset(), and resize_image().
> Are these functions abandoned, or are they called somewhere but just not shown in the Python script?
The first two functions are used to process the validation datasets for evaluation and inference. The last one is used to resize images as well as video frames.
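As an illustration only (the actual resize_image() in the repo may use a different library, signature, or target size), resizing an image or an extracted video frame to a fixed resolution could look like this:

```python
from PIL import Image

def resize_image(image_file, size=(224, 224)):
    # load an image (or an extracted video frame) and resize it to a fixed resolution
    image = Image.open(image_file).convert("RGB")
    return image.resize(size)
```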
> Also, in preprocess_avsd_to_tensor_dataset(), the audio and the frames extracted from the videos are not actually used. Is this correct?
Usually we do not store image pixels directly in the tensor dataset, as that would incur significant memory use; instead, we keep the index (or identifier) of the image and load it during training.
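A minimal sketch of that idea (the field names, image_dir, and the 224x224 transform are illustrative assumptions, not the repo's actual code): the tensor dataset stores token ids plus an image file name, and the pixels are only decoded inside __getitem__ during training.

```python
import os
import torch
from torch.utils.data import Dataset
from torchvision import transforms
from PIL import Image

class LazyImageTextDataset(Dataset):
    def __init__(self, examples, image_dir):
        # each example holds token ids and an image file name, not raw pixels
        self.examples = examples
        self.image_dir = image_dir
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        ex = self.examples[idx]
        # the image is read from disk only when this example is requested
        image = Image.open(os.path.join(self.image_dir, ex["image_name"])).convert("RGB")
        return {
            "input_ids": torch.tensor(ex["input_ids"]),
            "labels": torch.tensor(ex["labels"]),
            "pixel_values": self.transform(image),
        }
```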
> My last question is about the filtering: preprocess_avsd_to_tensor_dataset() has no filtering step based on the input text length, while the other preprocessing functions (for VQA and Alpaca) filter on the length of the input text without the answer. Why do we filter on the input text rather than on the full_text? And why doesn't the AVSD preprocessing have a filtering step?
This is because, when the instruction alone is longer than the maximum sequence length, no response tokens would be left in the truncated sequence, so the model could not learn how to generate a response.
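A hedged sketch of such a filter (the field name "instruction" and max_length are assumptions, not the repo's exact code): examples whose instruction alone already fills the context window are dropped, because after truncation they would contain no response tokens to learn from.

```python
def filter_by_instruction_length(examples, tokenizer, max_length=512):
    kept = []
    for ex in examples:
        # count only the instruction/question tokens, without the answer
        n_tokens = len(tokenizer(ex["instruction"])["input_ids"])
        # if the instruction already fills the window, the answer would be
        # truncated away and the example teaches nothing about responding
        if n_tokens < max_length:
            kept.append(ex)
    return kept
```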
Thanks for the cool project! I have two questions:
Stage 1:
1. Download the COCO image dataset (2014 Train images [83K/13GB]) from: https://cocodataset.org/#download, unzip to current folder (train2014/).
2. Download the Macaw-LLM generated COCO examples: https://github.com/lyuchenyang/Macaw-LLM/blob/main/data/generated_examples_coco.json
3. Download the Macaw-LLM generated AVSD examples: https://github.com/lyuchenyang/Macaw-LLM/blob/main/data/generated_examples_avsd.json
4. Download the Charades video dataset (Data (scaled to 480p, 13 GB)) from: https://prior.allenai.org/projects/charades, unzip to current folder (Charades_v1_480/).
5. In the current folder, create a folder named "avsd/". In "./avsd/", create "./avsd/videos/", "./avsd/audios/", and "./avsd/images/". Move all the videos from "Charades_v1_480/" to "./avsd/videos/".
6. In the current folder, create a folder named "coco/". In "./coco/", create "./coco/images/". Move all the images from "train2014/" to "./coco/images/". (Steps 5 and 6 can be scripted; see the sketch right after this list.)
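Steps 5 and 6 could be scripted roughly like this (a sketch assuming train2014/ and Charades_v1_480/ already sit in the current folder):

```python
import os
import shutil

def move_all(src_dir, dst_dir):
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        shutil.move(os.path.join(src_dir, name), os.path.join(dst_dir, name))

# step 5: create the AVSD folders and move the Charades videos
for d in ("avsd/videos", "avsd/audios", "avsd/images"):
    os.makedirs(d, exist_ok=True)
move_all("Charades_v1_480", "avsd/videos")

# step 6: create the COCO folder and move the train2014 images
move_all("train2014", "coco/images")
```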
Stage 2:
1. From https://visualqa.org/download.html download "Training annotations 2017 v2.0", "Validation annotations 2017 v2.0", "Training questions 2017 v2.0", "Validation questions 2017 v2.0". Put them in "./vqa/" and unzip.
2. From https://video-dialog.com/ download AVSD Dataset (4 files), put them into "./avsd/".
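After both stages, a quick sanity check of the layout could look like this (paths are the ones listed above; adjust if yours differ):

```python
import os

expected_dirs = [
    "coco/images",  # COCO train2014 images (Stage 1, step 6)
    "avsd/videos",  # Charades videos (Stage 1, step 5)
    "avsd/audios",
    "avsd/images",
    "vqa",          # VQA v2 annotation/question json files (Stage 2, step 1)
]
for path in expected_dirs:
    print(path, "ok" if os.path.isdir(path) else "MISSING")
```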
But I'm not sure whether this is all we need.
def add_image_names(dir=None):
    all_examples = json_load(dir)['annotations']
However, I can't find any "image_path" field in any of the above json files.
Looking forward to your answer. Thank you.