jacobgil / keras-dcgan

Keras implementation of Deep Convolutional Generative Adversarial Networks

Dataset for Training #1

Open divamgupta opened 8 years ago

divamgupta commented 8 years ago

I trained the model on various datasets with more than 20k images each, but even after several epochs I'm not getting the desired results.

Could you tell me which dataset this model was trained and tested on?

Thank You

jchen7 commented 8 years ago

A couple of questions, divamgupta: which version of Keras did you try it on? Did you change anything to get it to work, or did you run the model as is? I am having some trouble training as well.

Thanks

kastnerkyle commented 8 years ago

You might need to match DCGAN's original setup more closely - feeding the generated samples and the real data to the discriminator as separate minibatches is important. See this line and the ones just before it: https://github.com/Newmu/dcgan_code/blob/master/faces/train_uncond_dcgan.py#L138
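For reference, a minimal sketch of that idea in Keras (not code from this repo; `generator`, `discriminator`, `X_train`, `batch_size`, and `latent_dim` are placeholder names, and the loss/label shapes assume a single sigmoid discriminator output):

```python
import numpy as np

def train_discriminator_step(generator, discriminator, X_train,
                             batch_size=128, latent_dim=100):
    # Sample a minibatch of real images.
    idx = np.random.randint(0, X_train.shape[0], batch_size)
    real_images = X_train[idx]

    # Generate a minibatch of fake images from uniform noise.
    noise = np.random.uniform(-1, 1, size=(batch_size, latent_dim))
    fake_images = generator.predict(noise)

    # Feed the real and generated samples as two separate minibatches
    # instead of concatenating them, as in the original DCGAN training loop.
    d_loss_real = discriminator.train_on_batch(real_images, np.ones(batch_size))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros(batch_size))
    return d_loss_real, d_loss_fake
```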

MinaRe commented 8 years ago

Dear @jacobgil

Thanks for sharing, nice implementation. I want to generate 5 different types of grayscale images (on my own dataset). Do the input images need to be in HDF5 format?

Thanks in advance!

jacobgil commented 8 years ago

@MinaRe The images can be in any format that can be read with OpenCV, e.g. .jpg, .png, .pgm, etc.

jacobgil commented 7 years ago

@MinaRe I'm not sure what the question is. One way to load images is to just use OpenCV and read them into numpy arrays:

import cv2
img = cv2.imread("/home/MinaRe/img.png", 0)  # 0 = load as grayscale
img = cv2.resize(img, (32, 32))

Then set X_train to these images. Since you have many images and RAM may be a bottleneck, you will probably want to load the next batch of images from disk inside the training loop. Hopefully that helps.
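A minimal sketch of such a loader (the directory path, image size, scaling, and the helper name `load_batch` are assumptions for illustration, not part of this repo; adjust the channel axis to your Keras image ordering):

```python
import glob
import cv2
import numpy as np

# Hypothetical directory of training images.
image_paths = sorted(glob.glob("/home/MinaRe/images/*.png"))

def load_batch(paths, start, batch_size, size=(32, 32)):
    batch = []
    for path in paths[start:start + batch_size]:
        img = cv2.imread(path, 0)      # read as grayscale
        img = cv2.resize(img, size)    # resize to the network's input size
        batch.append(img)
    # Scale to [-1, 1] (matching a tanh generator output) and add a channel axis.
    batch = (np.asarray(batch, dtype=np.float32) - 127.5) / 127.5
    return batch[..., None]

# Inside the training loop, something like:
# for start in range(0, len(image_paths), batch_size):
#     X_batch = load_batch(image_paths, start, batch_size)
#     # ... train the discriminator / generator on X_batch
```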