Open EvaFlower opened 1 week ago
Hi,
Could you provide the file referenced in train.sh?
export TRAIN_JSON="/mnt/aigc_cq/private/amandaaluo/dataset/multi_obj/multi_obj_with_clip_base.json"
Also, I wonder if you could release your trained model checkpoints to make evaluation easier.
Best,
Hi, we have uploaded our train_json at https://drive.google.com/file/d/1QH_nZWSbPlrpRdtTI-mxzqC7lK2HlON6/view?usp=sharing. As for the checkpoint, we may release it later.
@amandaluof I really appreciate your follow-up.
I am still stuck on reproducing the results of the text-to-image part. In multi_obj_with_clip_base.json, I can see you retrieve data from LAION-625K/improved_aesthetics_6.5plus/processed_data/images/00013_00862.jpg and LAION/laion_1.2b_subset/images/ec910b35-5b8c-4320-8c4f-7e6a20fe4d11.png.
Could you give some instructions on how you obtained the 120k training image pairs from LAION-5B?
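In case it helps anyone else inspecting the same file: below is a small sketch of my own (not from the authors' repo) that loads a train JSON such as multi_obj_with_clip_base.json and reports which referenced images exist locally. It makes no assumptions about the exact schema; it just scans every string value in the JSON for image-like paths.

import json
import os

def iter_strings(node):
    # Recursively yield every string value in a nested JSON structure.
    if isinstance(node, str):
        yield node
    elif isinstance(node, dict):
        for value in node.values():
            yield from iter_strings(value)
    elif isinstance(node, list):
        for item in node:
            yield from iter_strings(item)

def check_image_paths(json_path):
    # Collect strings that look like image paths and report which are missing locally.
    with open(json_path, "r") as f:
        data = json.load(f)
    image_exts = (".jpg", ".jpeg", ".png", ".webp")
    paths = {s for s in iter_strings(data) if s.lower().endswith(image_exts)}
    missing = sorted(p for p in paths if not os.path.exists(p))
    print(f"{len(paths)} image paths referenced, {len(missing)} missing locally")
    for p in missing[:20]:  # print a small sample of missing paths
        print("missing:", p)

if __name__ == "__main__":
    check_image_paths("multi_obj_with_clip_base.json")

On my side this is how I confirmed that the LAION-625K and laion_1.2b_subset paths in the JSON point to data I do not have, which is why instructions for rebuilding those subsets from LAION-5B would be very helpful.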