carpedm20 / DCGAN-tensorflow

A tensorflow implementation of "Deep Convolutional Generative Adversarial Networks"
http://carpedm20.github.io/faces/
MIT License

generate specified picture not successful #85

Open shartoo opened 7 years ago

shartoo commented 7 years ago

I want to generate pictures with specified features, such as wearing a hat or a big nose (these are among the labels in the CelebA data), so I changed the code. The code now treats this as a multi-class task; although it runs successfully and produces some results, the results look bad. For example (trying to generate pictures with the Wearing_Hat attribute):

[generated samples: test_arange_99]

As you can see, some blurred features appear. But training cannot progress any further, because the discriminator loss goes to zero very early while the generator loss stays almost unchanged:

[training-log screenshot: qq 20170117100905]

The output above comes from my program's training stage, with 80,000 pictures cropped from CelebA, after 200 epochs (the pictures shown are from epoch 0, just for illustration). How could I modify the code to generate pictures with a specified feature? By redefining the loss function?
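One common mitigation when the discriminator loss collapses to zero this early is one-sided label smoothing: train D against a real-image target of about 0.9 instead of 1.0, so D never becomes perfectly confident and keeps supplying gradient to G. This is a general GAN trick, not something the repository does by default; a minimal numpy sketch of the effect on the sigmoid cross-entropy loss (the repo's `tf.nn.sigmoid_cross_entropy_with_logits` would simply take the smoothed target as its labels argument):

```python
import numpy as np

def sigmoid_xent(logits, targets):
    """Numerically stable sigmoid cross-entropy, matching
    tf.nn.sigmoid_cross_entropy_with_logits."""
    return np.maximum(logits, 0) - logits * targets + np.log1p(np.exp(-np.abs(logits)))

# Discriminator logits on a batch of real images (illustrative values
# for a D that has become very confident).
real_logits = np.array([4.0, 6.0, 5.0])

# Hard targets drive the real-image loss toward zero once D is confident...
hard_loss = sigmoid_xent(real_logits, np.ones_like(real_logits)).mean()

# ...while a one-sided smoothed target of 0.9 keeps the loss (and gradient)
# bounded away from zero.
smooth_loss = sigmoid_xent(real_logits, 0.9 * np.ones_like(real_logits)).mean()

print(hard_loss, smooth_loss)
```

Smoothing only the real-image target (not the fake one) is the usual choice; smoothing the fake target as well can actually help the generator's failure modes persist.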

carpedm20 commented 7 years ago

So if you changed the code to train CelebA conditioned on different labels, it highly depends on how many labels there are and how they are formed. How did you change the code? By the way, this issue doesn't mean that CelebA training fails with the default code and command, does it?

shartoo commented 7 years ago

Yes, the default code works fine; I want to explore a bit deeper. If the dataset is not mnist, your code takes the unlabeled route (all images effectively belong to a single label), i.e. y_dim is None in the code.
I linked every image to its label in the train method at every key point, like:

From lines 147-156:

if config.dataset == 'mnist':
    sample_images = data_X[0:self.sample_size]
    sample_labels = data_y[0:self.sample_size]
else:
    sample_files = data[0:self.sample_size]
    sample = [get_image(sample_file, self.image_size, is_crop=self.is_crop,
                        resize_w=self.output_size, is_grayscale=self.is_grayscale)
              for sample_file in sample_files]
    if (self.is_grayscale):
        sample_images = np.array(sample).astype(np.float32)[:, :, :, None]
    else:
        sample_images = np.array(sample).astype(np.float32)

to

if config.dataset == 'mnist':
    sample_images = data_X[0:self.sample_size]
    sample_labels = data_y[0:self.sample_size]
else:
    sample_files = data[0:self.sample_size]
    sample = [get_image(sample_file, self.image_size, is_crop=self.is_crop,
                        resize_w=self.output_size, is_grayscale=self.is_grayscale)
              for sample_file in sample_files]
    batch_labels = self.get_batach_label(sample_files)
    if (self.is_grayscale):
        sample_images = np.array(sample).astype(np.float32)[:, :, :, None]
    else:
        sample_images = np.array(sample).astype(np.float32)

and every G_loss and D_loss training op is fed batch_labels, like:

 _, summary_str = self.sess.run([g_optim, self.g_sum],feed_dict={self.z: batch_z, self.y: batch_labels})

I wrote the get_batach_label method in utils.py.
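The thread never shows get_batach_label itself, so here is a hypothetical reconstruction of what such a helper could look like. It assumes labels come from CelebA's list_attr_celeba.txt, where each line is a filename followed by 40 attribute flags valued 1 or -1; the sample text, helper names, and toy attribute count below are all assumptions for illustration, not code from the repository:

```python
import numpy as np

# Toy stand-in for list_attr_celeba.txt (real file: 40 attributes per line).
SAMPLE_ATTR_FILE = """\
000001.jpg -1 1 1 -1
000002.jpg 1 -1 -1 1
"""

def load_attr_table(text):
    """Map image filename -> 0/1 attribute vector."""
    table = {}
    for line in text.strip().splitlines():
        parts = line.split()
        # Convert -1/1 attribute flags to a 0/1 multi-hot vector.
        table[parts[0]] = (np.array(parts[1:], dtype=int) > 0).astype(np.float32)
    return table

def get_batch_labels(sample_files, table):
    """Return an (N, y_dim) label matrix aligned with the image batch."""
    return np.stack([table[f.split('/')[-1]] for f in sample_files])

table = load_attr_table(SAMPLE_ATTR_FILE)
labels = get_batch_labels(['data/000001.jpg', 'data/000002.jpg'], table)
print(labels.shape)  # (2, 4) for this toy file; (N, 40) with the real CelebA file
```

Returning a multi-hot 0/1 matrix (rather than a single class index per image) matters because CelebA attributes are not mutually exclusive.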


To generate sample images with a specified feature, the randomly chosen label in the visualize method should be changed to a fixed one.

From utils.py:

elif option == 1:
    values = np.arange(0, 1, 1./config.batch_size)
    for idx in range(100):
      print(" [*] %d" % idx)
      #z_sample = np.zeros([config.batch_size, dcgan.z_dim])
      z_sample = np.random.uniform(-0.5, 0.5, size=(config.batch_size, dcgan.z_dim))
      for kdx, z in enumerate(z_sample):
        z[idx] = values[kdx]

      # generate Wearing_hat,33 is the index of Wearing_hat feature in label list
      y = 33*np.ones(config.batch_size,dtype=int)
      y_one_hot = np.zeros((config.batch_size, config.y_dim))
      y_one_hot[np.arange(config.batch_size), y] = 1
      samples = sess.run(dcgan.sampler, feed_dict={dcgan.z: z_sample, dcgan.y: y_one_hot})
      save_images(samples, [pic_line, 8], './samples/test_arange_%s.png' % (idx))
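One possible source of the blur above: CelebA attributes are multi-label (a training face can be wearing a hat and have a big nose at once), so if training fed multi-hot label vectors, conditioning the sampler on a pure one-hot y lies outside the label distribution that G and D ever saw. A hedged sketch of two alternative test-time labels, assuming y_dim = 40 and Wearing_Hat at index 33 as in the snippet above (the mean attribute value is a placeholder, not measured from the dataset):

```python
import numpy as np

BATCH_SIZE = 64
Y_DIM = 40           # number of CelebA attributes
WEARING_HAT = 33     # index used in the snippet above

# Multi-hot conditioning vector: only Wearing_Hat is switched on.
y_sample = np.zeros((BATCH_SIZE, Y_DIM), dtype=np.float32)
y_sample[:, WEARING_HAT] = 1.0

# A softer alternative (an assumption, not from the repo): start from the
# mean attribute vector of the training set so the remaining entries stay
# plausible, then force Wearing_Hat on.
mean_attrs = np.full(Y_DIM, 0.2, dtype=np.float32)  # placeholder mean
y_soft = np.tile(mean_attrs, (BATCH_SIZE, 1))
y_soft[:, WEARING_HAT] = 1.0

print(y_sample.sum(axis=1)[:3])  # each row has exactly one attribute on
```

Either matrix can be fed as dcgan.y in place of the y_one_hot built in the snippet above; the soft variant keeps the conditioning vector closer to what the discriminator saw during training.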
xiaoming-qxm commented 7 years ago

@shartoo Have you got a better result now for "Wearing hat"?

shartoo commented 7 years ago

@daoliker No, I haven't done any more research on this topic. I'll try again sometime later.

ttdbb1 commented 5 years ago

Have you got a better result now for "Wearing hat"?