zsdonghao / text-to-image

Generative Adversarial Text to Image Synthesis / Please Star --> https://github.com/zsdonghao/tensorlayer

For our own dataset #15

Open Better-Boy opened 6 years ago

Better-Boy commented 6 years ago

How can we apply the same architecture to our own dataset of images and captions? Please give instructions.

314rated commented 6 years ago

Yes, this info would be greatly useful. Thanks.

zsdonghao commented 6 years ago

For a customised dataset, you need to prepare the data to fit the same format as here: https://github.com/zsdonghao/text-to-image/blob/master/data_loader.py#L166
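
For reference, here is a minimal sketch (not the repository's own code) of what that format amounts to for flowers-style data: one sub-directory per class under a `text_c10/` folder, and one `<image_name>.txt` file per image containing one caption per line. The path `my_dataset/text_c10` is a hypothetical placeholder.

```python
# Minimal sketch: collect captions keyed by image name from a
# text_c10/<class_name>/<image_name>.txt layout (assumed layout, not repo code).
import os

def load_captions(caption_root):
    """Return {image_name: [caption, ...]} for every .txt file found."""
    captions = {}
    for class_dir in sorted(os.listdir(caption_root)):
        class_path = os.path.join(caption_root, class_dir)
        if not os.path.isdir(class_path):
            continue
        for fname in sorted(os.listdir(class_path)):
            if not fname.endswith('.txt'):
                continue
            image_name = fname[:-4]  # strip ".txt"; should match the image file's base name
            with open(os.path.join(class_path, fname)) as f:
                captions[image_name] = [line.strip() for line in f if line.strip()]
    return captions

captions = load_captions('my_dataset/text_c10')  # hypothetical path
```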

BTW, these are the steps to create the vocabulary: https://github.com/wagamamaz/tensorlayer-tricks/blob/master/README.md#9-sentences-tokenization
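
Roughly, vocabulary creation along the lines of that guide looks like the sketch below. It uses the TensorLayer 1.x `tl.nlp` helpers (`process_sentence` relies on NLTK's punkt tokenizer); exact names and defaults may differ between versions, and the example captions are made up.

```python
# Rough sketch of vocabulary creation following the "sentences tokenization"
# trick linked above (TensorLayer 1.x tl.nlp API; may differ by version).
import tensorlayer as tl

captions = [
    "this flower has white petals and a yellow center",  # example captions only
    "a bird with a red head and a short black beak",
]

# Tokenise every caption, adding start/end tokens.
processed = [tl.nlp.process_sentence(c, start_word="<S>", end_word="</S>") for c in captions]

# Write the word-count/vocabulary file, then reload it as a Vocabulary object.
tl.nlp.create_vocab(processed, word_counts_output_file='vocab.txt', min_word_count=1)
vocab = tl.nlp.Vocabulary('vocab.txt', start_word="<S>", end_word="</S>", unk_word="<UNK>")

# Map one tokenised caption to word IDs for training.
ids = [vocab.word_to_id(w) for w in processed[0]]
print(ids)
```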

Better-Boy commented 6 years ago

I went through the code of data_loader.py and did not encounter any use of ".t7" files. You only need to arrange your dataset with all the images in one directory, and the text descriptions, whose file names match the image file names, in per-class directories named after each class. All the directories containing text descriptions should sit under one parent directory called "text_c10". Then run data_loader.py and you will get the output required to train the model; a quick sanity check of this layout is sketched below.
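
For example, a check that every image has a matching caption file could look like this (`my_dataset/`, `images/` and `text_c10/` are hypothetical names; adjust to your own paths):

```python
# Hypothetical sanity check: every image in images/ should have a caption
# file with the same base name somewhere under text_c10/<class_name>/.
import os

image_dir = 'my_dataset/images'      # all images in one directory (hypothetical path)
caption_dir = 'my_dataset/text_c10'  # class-named sub-directories of .txt captions

caption_names = set()
for class_name in os.listdir(caption_dir):
    class_path = os.path.join(caption_dir, class_name)
    if os.path.isdir(class_path):
        caption_names.update(os.path.splitext(f)[0]
                             for f in os.listdir(class_path) if f.endswith('.txt'))

missing = [f for f in os.listdir(image_dir)
           if os.path.splitext(f)[0] not in caption_names]
print('images without captions:', missing)
```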

If there is any mistake in my understanding, please correct it.