Not really an issue; I'm just puzzled about what `max_size` is used for. It serves as the size of an ImagePool and is used to store "fake outputs". It is used here:
```python
# Update G network and record fake outputs
fake_A, fake_B, _, summary_str = self.sess.run(
    [self.fake_A, self.fake_B, self.g_optim, self.g_sum],
    feed_dict={self.real_data: batch_images, self.lr: lr})
self.writer.add_summary(summary_str, counter)
[fake_A, fake_B] = self.pool([fake_A, fake_B])
```
Does the size influence training? By default it is set to 50. Any ideas on this? Thanks!
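For context, here is a minimal sketch of how an image pool of this kind typically works, assuming it follows the history-buffer idea from Shrivastava et al. (the class name, method names, and 50 % swap probability below are assumptions, not necessarily the exact code in this repo): `max_size` caps how many past fake images are remembered, and each call either returns the fresh fake or swaps it for a randomly chosen older one.

```python
import copy
import random

class ImagePool:
    """Buffer of previously generated fakes; a sketch, not the repo's exact code."""

    def __init__(self, max_size=50):
        self.max_size = max_size  # how many past fakes to remember
        self.images = []

    def __call__(self, image):
        if self.max_size <= 0:
            # A size of 0 disables the pool: always train on the fresh fake.
            return image
        if len(self.images) < self.max_size:
            # Fill the buffer first; return the fresh fake unchanged.
            self.images.append(image)
            return image
        if random.random() > 0.5:
            # Swap: return a stored (older) fake and keep the fresh one.
            idx = random.randrange(self.max_size)
            old = copy.copy(self.images[idx])
            self.images[idx] = image
            return old
        # Otherwise return the fresh fake as-is.
        return image
```

Under this reading, `max_size` controls how far back the discriminator's "memory" of old generator outputs reaches: the discriminator is trained on a mix of current and historical fakes, which smooths training and reduces oscillation, so the size should mainly affect stability rather than final quality.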