AILab-CVC / SEED-X

Multimodal Models in Real World

Data structure #16

Open floppycracken opened 2 months ago

floppycracken commented 2 months ago

Thanks for the great work and for sharing the code. I wanted to ask how to prepare the data for the case where the model outputs both image and text. This would be similar to the case of the seedx-ppt model.

geyuying commented 1 month ago

Hi, we currently support the following dataloader with the specified data structure.

For the "build_llava_jsonl_datapipes" dataloader, each folder stores a number of jsonl files, and each jsonl file contains 10K pieces of content. An example of the content is as follows:

{"image": "coco/train2017/000000033471.jpg", "data": ["What are the colors of the bus in the image?", "The bus in the image is white and red.", "What feature can be seen on the back of the bus?", "The back of the bus features an advertisement.", "Is the bus driving down the street or pulled off to the side?", "The bus is driving down the street, which is crowded with people and other vehicles."]}
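A minimal sketch of producing a jsonl shard in this structure (the file name `part-000.jsonl` is an arbitrary choice for illustration; only the record layout comes from the example above). The `"data"` list alternates question, answer, question, answer:

```python
import json

# Hypothetical sketch: write multi-turn VQA samples in the jsonl
# structure expected by "build_llava_jsonl_datapipes". Each record
# has an "image" path and a flat "data" list alternating Q/A turns.
samples = [
    {
        "image": "coco/train2017/000000033471.jpg",
        "data": [
            "What are the colors of the bus in the image?",
            "The bus in the image is white and red.",
        ],
    }
]

# One JSON object per line ("jsonl"); a real shard would hold ~10K.
with open("part-000.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# Reading back: parse each line independently.
with open("part-000.jsonl") as f:
    records = [json.loads(line) for line in f]
print(len(records))  # 1
```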

For the "build_caption_datapipes_with_pixels" dataloader, each folder stores a number of .tar files, and image-text pairs are read in the webdataset format.
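A sketch of packing such a shard with only the standard library, assuming the usual webdataset convention that each sample's files share a key and differ by extension (e.g. `<key>.jpg` for pixels, `<key>.txt` for the caption); the shard name and placeholder bytes are made up for illustration:

```python
import io
import tarfile

# Hypothetical sketch: build a .tar shard of image-text pairs in the
# webdataset layout. Each sample contributes two members with the
# same key: "<key>.jpg" (image bytes) and "<key>.txt" (caption).
pairs = [
    ("000000001", b"\xff\xd8placeholder-jpeg-bytes", "a red bus on a street"),
]

with tarfile.open("shard-000.tar", "w") as tar:
    for key, jpg_bytes, caption in pairs:
        for name, payload in ((f"{key}.jpg", jpg_bytes),
                              (f"{key}.txt", caption.encode("utf-8"))):
            info = tarfile.TarInfo(name=name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))

# Sanity check: the shard contains both members of the pair.
with tarfile.open("shard-000.tar") as tar:
    names = sorted(tar.getnames())
print(names)  # ['000000001.jpg', '000000001.txt']
```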

For the "build_single_turn_edit_datapipes" dataloader, each folder stores a number of jsonl files, and each jsonl file contains 10K pieces of content. An example of the content is as follows:

{"source_image": "source_images/f6f4d0669694df5b.jpg", "target_image": "target_images/f6f4d0669694df5b.jpg", "instruction": "Erase the car that is parked in front of the Roebuck building."}
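A small validation sketch for this format; the `check_edit_record` helper and the required-key set are my own illustration, derived only from the three fields shown in the example record:

```python
import json

# Hypothetical sketch: validate one line of a single-turn-edit jsonl
# shard. Each record needs a source image, a target image, and an
# editing instruction.
REQUIRED_KEYS = {"source_image", "target_image", "instruction"}

def check_edit_record(line: str) -> dict:
    """Parse a jsonl line and verify the edit-pair fields exist."""
    record = json.loads(line)
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return record

line = json.dumps({
    "source_image": "source_images/f6f4d0669694df5b.jpg",
    "target_image": "target_images/f6f4d0669694df5b.jpg",
    "instruction": "Erase the car that is parked in front of the Roebuck building.",
})
record = check_edit_record(line)
print(record["source_image"])  # source_images/f6f4d0669694df5b.jpg
```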