xiaowei-hu / CycleGAN-tensorflow

Tensorflow implementation for learning an image-to-image translation without input-output pairs. https://arxiv.org/pdf/1703.10593.pdf
716 stars 294 forks source link

max_size parameter, does it impact training #38

Open mark-joe opened 6 years ago

mark-joe commented 6 years ago

Not really an issue, I'm just puzzled about what `max_size` is used for. It serves as the size of an `ImagePool` and is used to store "fake outputs". Used here:

```python
            # Update G network and record fake outputs
            fake_A, fake_B, _, summary_str = self.sess.run(
                [self.fake_A, self.fake_B, self.g_optim, self.g_sum],
                feed_dict={self.real_data: batch_images, self.lr: lr})
            self.writer.add_summary(summary_str, counter)
            [fake_A, fake_B] = self.pool([fake_A, fake_B])

            # Update D network
            _, summary_str = self.sess.run(
                [self.d_optim, self.d_sum],
                feed_dict={self.real_data: batch_images,
                           self.fake_A_sample: fake_A,
                           self.fake_B_sample: fake_B,
                           self.lr: lr})
            self.writer.add_summary(summary_str, counter)
```

Does the size influence training? By default it is set to 50. Any ideas on this? Thanks!
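For anyone wondering what the pool actually does with `max_size`: below is a minimal sketch of the usual image-pool / history-buffer logic (class and method names here are illustrative, not necessarily the repo's exact code). The pool fills up to `max_size`, and once full it returns, with probability 0.5, a randomly chosen older fake image in place of the freshly generated one, so the discriminator also trains on a history of generator outputs:

```python
import random

class ImagePool:
    """Illustrative sketch of an image history buffer (assumed behavior,
    not copied from the repo)."""

    def __init__(self, max_size=50):
        self.max_size = max_size
        self.images = []

    def query(self, image):
        if self.max_size <= 0:
            # Pool disabled: always pass the new image through.
            return image
        if len(self.images) < self.max_size:
            # Fill the pool first, returning the new image unchanged.
            self.images.append(image)
            return image
        if random.random() > 0.5:
            # Swap: hand back a random older image, keep the new one.
            idx = random.randrange(self.max_size)
            old = self.images[idx]
            self.images[idx] = image
            return old
        # Otherwise pass the new image through, leaving the pool as-is.
        return image
```

So `max_size` bounds how far back in training the discriminator's "stale" fakes can come from; with `max_size=0` the buffer is effectively off and the discriminator only ever sees the latest generator output.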

Deeplearning20 commented 6 years ago

Hello, I have the same question as you. Have you figured it out?

starcream commented 4 years ago

The original CycleGAN paper adopts this idea and keeps an image buffer that stores the 50 previously generated images. As I understand it (the paper credits the strategy to Shrivastava et al.'s work on training with a history of generated images), updating the discriminator on older fakes as well as the newest ones is meant to reduce oscillation during training, though I can't say exactly why it works so well.