gligen / GLIGEN

Open-Set Grounded Text-to-Image Generation

Data for training #17

Open jiuntian opened 1 year ago

jiuntian commented 1 year ago

Great work!

Can you release the DATA section that describes how the training data was prepared?

Thanks.

dolphin0104 commented 1 year ago

Me too, same question.

Yuheng-Li commented 1 year ago

The data has been uploaded. You can find the description and download links here.

Note: we are still working on uploading CC3M and O365, which are relatively large datasets. (They should be available shortly after this comment.)

guangqianzhang commented 1 year ago

How can I embed images and text in this project with our own dataset? For example, the CC3M TSV files contain image_embedding_before, image_embedding_after, text... How should I create this embedding data for my own dataset?
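For context, here is a rough sketch of how I imagine such embeddings could be extracted with CLIP. The model name (openai/clip-vit-large-patch14) and my reading of "before"/"after" as features before/after CLIP's projection layer are assumptions on my part, not taken from the repo; please correct me if that mapping is wrong.

```python
# Rough sketch (my own guess, not from the repo): extract CLIP features for one
# image/caption pair. I am assuming "image_embedding_before"/"image_embedding_after"
# mean the pooled vision feature before/after CLIP's visual projection layer.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("example.jpg").convert("RGB")   # placeholder image path
caption = "a photo of a cat"                        # placeholder caption

with torch.no_grad():
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)

    # Vision branch: pooled feature before and after the projection head.
    vision_out = model.vision_model(pixel_values=inputs["pixel_values"])
    image_embedding_before = vision_out.pooler_output
    image_embedding_after = model.visual_projection(image_embedding_before)

    # Text branch: pooled feature projected into the shared embedding space.
    text_out = model.text_model(input_ids=inputs["input_ids"],
                                attention_mask=inputs["attention_mask"])
    text_embedding = model.text_projection(text_out.pooler_output)

print(image_embedding_before.shape, image_embedding_after.shape, text_embedding.shape)
```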

Yuheng-Li commented 1 year ago

Refer to #38.

guangqianzhang commented 1 year ago

Do we have to prepare our own data in .tsv form? Do the prepared images have to be transformed first? Thanks a lot!
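For example, would a generic TSV packing like the one below be the right direction? This is only my guess at the general pattern (a base64-encoded image plus a JSON annotation per row); the exact columns this repo expects, and the annotation keys "caption"/"boxes"/"labels", are placeholders I made up, so the DATA docs / #38 would be the authority.

```python
# Generic sketch of packing images + annotations into a TSV file.
# Column layout and annotation keys are placeholders, not GLIGEN's exact schema.
import base64
import csv
import json

rows = [
    ("0", "images/cat.jpg", {"caption": "a photo of a cat",
                             "boxes": [[0.1, 0.2, 0.8, 0.9]],
                             "labels": ["cat"]}),
]

with open("my_dataset.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    for idx, path, ann in rows:
        with open(path, "rb") as img_f:
            img_b64 = base64.b64encode(img_f.read()).decode("utf-8")
        writer.writerow([idx, img_b64, json.dumps(ann)])
```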

Naidala commented 9 months ago

Hi, could you confirm that the checkpoints for Generation and Inpainting are trained on a mix of datasets (GoldG, O365, SBU, CC3M)? If so, I could use gligen_inference.py with no need for training.
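If that is the case, my plan would just be a quick sanity check like the sketch below before running gligen_inference.py. The checkpoint filename is a placeholder for whichever released checkpoint I download; this is my own snippet, not something from the repo.

```python
# Load the downloaded checkpoint and list its top-level keys to confirm it
# already contains trained weights (i.e. no further training needed).
import torch

ckpt = torch.load("checkpoint_generation_text.pth", map_location="cpu")  # placeholder filename
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
```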