I have trained a model to generate 128 by 128 images. Samples generated from a fixed noise vector look good after a certain number of iterations; however, samples generated from freshly drawn random noise have a lot of artifacts. What are the possible reasons for that? I used the following code to test the model. Thanks for your tips.
# Generate new samples one at a time.
if _iteration == 20000:
    TEST_BATCH_SIZE = 1
    index_sample = 0
    while index_sample < 20000:
        # Note: building a new tf.constant and Generator op on every
        # iteration grows the graph; a placeholder built once outside
        # the loop and fed via feed_dict would avoid that.
        random_noise = tf.constant(
            np.random.normal(size=(TEST_BATCH_SIZE, 128)).astype('float32'))
        random_noise_samples = Generator(TEST_BATCH_SIZE, noise=random_noise)
        samples = session.run(random_noise_samples)
        # Rescale generator output from [-1, 1] to [0, 255].
        samples = ((samples + 1.) * (255.99 / 2)).astype('int32')
        lib.save_images.save_images(
            samples.reshape((TEST_BATCH_SIZE, 3, 128, 128)),
            'new_samples/samples_{}.png'.format(index_sample))
        index_sample += 1  # was missing in the original, so the loop never advanced
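As a side check that the rescaling step itself is not the source of the artifacts: the expression in the snippet maps the generator's tanh output range [-1, 1] onto integer pixel values [0, 255]. A minimal NumPy sketch (standalone, no TensorFlow needed) confirming the mapping at the extremes:

```python
import numpy as np

# Simulated generator outputs at the extremes and midpoint of the tanh range.
samples = np.array([-1.0, 0.0, 1.0], dtype=np.float32)

# Same rescaling as in the test script: [-1, 1] -> [0, 255].
pixels = ((samples + 1.) * (255.99 / 2)).astype('int32')
print(pixels)  # [  0 127 255]
```

So any value the generator can emit lands in the valid pixel range, and the artifacts must come from the noise distribution or the model itself rather than from this conversion.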