HaozheZhao / MIC

MMICL, a state-of-the-art VLM with in-context learning ability, from PKU

How to train the model using MIC_sampled? #20

Open ElegantLin opened 1 year ago

ElegantLin commented 1 year ago

Hi Haozhe,

Thanks for your great work. I have downloaded the MIC_sampled dataset and found that the images are included as base64-encoded strings, so I assume I should be able to train on it without any other external data. I also checked the training shell script and found a train_file argument in https://github.com/HaozheZhao/MIC/blob/master/run_script/flickr/deep_speed_blip2_t5xl.sh#L27, but I don't know how to generate this file. Could you please tell us how to train the model with the MIC_sampled dataset?

Thanks!

HaozheZhao commented 1 year ago

Hello, the training file referenced in the script is created with our MIC repo. It is the tool we use to transform open-source datasets into our MIC format, complete with our designed context schema.

You can explore the repository, particularly the data_preprocess_save_arrow.py script. We use this script to preprocess the dataset and store the data in arrow files, which can then be loaded as a Dataset object from the Hugging Face datasets library. Integrating this into the Hugging Face Trainer codebase is straightforward.