samhains opened this issue 7 years ago
I think I figured it out: the visualize method used to generate samples from an existing model draws random uniform z values from the range [-0.5, 0.5], whereas the training sample generator draws z values from [-1.0, 1.0].
see here: https://github.com/carpedm20/DCGAN-tensorflow/blob/master/utils.py#L175
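The mismatch can be sketched like this (a minimal NumPy illustration of the two sampling ranges; `z_dim` and `batch_size` are assumed placeholder values, not the repo's exact config):

```python
import numpy as np

z_dim = 100      # assumed latent dimension (DCGAN-tensorflow defaults to 100)
batch_size = 64  # assumed batch size

# What visualize() in utils.py does: z drawn from [-0.5, 0.5]
z_vis = np.random.uniform(-0.5, 0.5, size=(batch_size, z_dim))

# What the training loop does: z drawn from [-1.0, 1.0]
z_train = np.random.uniform(-1.0, 1.0, size=(batch_size, z_dim))

# Sampling from the narrower range at generation time feeds the generator
# a latent distribution it was never trained on, which would explain the
# faded, smoothed-out samples. Matching the training range fixes it:
z_fixed = np.random.uniform(-1.0, 1.0, size=(batch_size, z_dim))
```

The practical fix is just widening the uniform range in utils.py's visualize code to match the range used during training.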
@sdlovecraft I only get okay results when setting OPTION=0 https://github.com/carpedm20/DCGAN-tensorflow/blob/master/main.py#L93
and yes, the samples get better after changing the random range to [-1.0, 1.0]!
Same here: option=1 will generate 100 samples, but none of them are meaningful!
@parkerzf hey, are you able to generate more than 100 images? If so, could you please help me?
I have a dataset of around 110k images, all cropped to 200x200 resolution. They are a collection of instagram photos, very varied, with a large amount of 'randomness' in the dataset.
I'm really happy with the samples generated during training, especially from around the 6th epoch onwards. However, the samples generated afterwards don't look right: they look like faded, smoothed-out versions of the samples produced around the 2nd epoch of training.
I trained with this command:
python main.py --dataset=instagram --is_train --input_height=200 --input_width=200 --output_height=200 --output_width=200
and am generating samples with this command:
python main.py --dataset=instagram --input_height=200 --input_width=200 --output_height=200 --output_width=200
I have tried all of the visualisation options.
I'm not entirely sure how to interpret the loss values, but I have attached some logs from the training output.
TrainingLogs.txt
My problem might be related to https://github.com/carpedm20/DCGAN-tensorflow/issues/139