Hi Justin-Tan,
I have three questions and hope for your reply.
1. While reading the paper, I saw that it compares visual results (at the same bpp) between the proposed model and BPG/JPEG. Do you know how to obtain an arbitrary bpp with BPG and JPEG? And how are the PSNR and MS-SSIM measures computed?
2. In your code, "trainer.py" L50 runs "test_handle = sess.run(gan.test_iterator.string_handle())" and L62 runs "sess.run(gan.test_iterator.initializer, feed_dict=feed_dict_test_init)". Is the test handle perhaps redundant during training? I could not find where the test images are actually processed.
3. In your code, the generator is updated first and then the discriminator. I found that https://github.com/NVIDIA/pix2pixHD/blob/master/train.py (from the paper "High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs") uses the same update strategy, but it seems more natural to update the discriminator first and then the generator. What is the difference between these two strategies, or do they lead to the same training result?
These are my questions; I'm looking forward to your reply.
Thanks very much for sharing your code. I have spent a lot of time reading it and have learned a great deal from it.
Unfortunately I'm not sure how they compressed their images to a given bitrate with the BPG format. The measures you refer to only apply when the semantic label maps are available and selective compression is being used.
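That said, one common workaround is to sweep the encoder quality setting until the encoded size lands near the target bpp, and then compute PSNR/MS-SSIM on the decoded image. Here is a rough sketch, not necessarily what the paper's authors did, using Pillow for JPEG and tf.image for the metrics; for BPG you could bisect over bpgenc's -q quantizer in the same way and decode with bpgdec before measuring:

```python
import io
import numpy as np
import tensorflow as tf
from PIL import Image

def jpeg_at_target_bpp(img, target_bpp):
    """Bisect over JPEG quality so the encoded size lands near target_bpp.

    img: PIL.Image in RGB mode. Returns (jpeg_bytes, achieved_bpp).
    """
    n_pixels = img.size[0] * img.size[1]
    lo, hi, best = 1, 95, None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=q)
        bpp = 8.0 * buf.tell() / n_pixels
        # Keep the encoding whose bpp is closest to the target so far
        if best is None or abs(bpp - target_bpp) < abs(best[1] - target_bpp):
            best = (buf.getvalue(), bpp)
        if bpp < target_bpp:
            lo = q + 1
        else:
            hi = q - 1
    return best

def psnr_msssim(original, compressed):
    """original, compressed: uint8 numpy arrays of shape [H, W, 3]."""
    x = tf.cast(tf.constant(original[np.newaxis]), tf.float32)
    y = tf.cast(tf.constant(compressed[np.newaxis]), tf.float32)
    psnr = tf.image.psnr(x, y, max_val=255.0)
    msssim = tf.image.ssim_multiscale(x, y, max_val=255.0)
    with tf.Session() as sess:
        return sess.run([psnr, msssim])

# Example usage (filename is just an illustration):
# img = Image.open('kodim01.png').convert('RGB')
# jpeg_bytes, bpp = jpeg_at_target_bpp(img, target_bpp=0.3)
# recon = np.asarray(Image.open(io.BytesIO(jpeg_bytes)))
# print(bpp, psnr_msssim(np.asarray(img), recon))
```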
Yes, you are right that the test handle is redundant for now; I intend to use it to compute statistics over a test batch in the future.
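For reference, this is roughly what I have in mind with the feedable-iterator handle: periodically switch the shared handle placeholder to the test iterator and accumulate a distortion statistic. Names like gan.handle, gan.training_phase and gan.distortion_loss below are placeholders, not the actual attributes in trainer.py:

```python
# Sketch only: evaluate a distortion metric over the test set via the
# feedable-iterator handle, without disturbing the training pipeline.
# gan.handle / gan.training_phase / gan.distortion_loss are hypothetical names.
test_handle = sess.run(gan.test_iterator.string_handle())
sess.run(gan.test_iterator.initializer, feed_dict=feed_dict_test_init)

if step % eval_interval == 0:
    test_losses = []
    try:
        while True:  # drain the (non-repeating) test iterator
            loss = sess.run(gan.distortion_loss,
                            feed_dict={gan.handle: test_handle,
                                       gan.training_phase: False})
            test_losses.append(loss)
    except tf.errors.OutOfRangeError:
        # Re-initialize so the test set can be evaluated again later
        sess.run(gan.test_iterator.initializer, feed_dict=feed_dict_test_init)
    print('Mean test distortion: {:.4f}'.format(np.mean(test_losses)))
```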
I would assume that either method yields very similar results. One interesting direction might be pretraining the generator for a certain number of iterations before reverting to the alternating update schedule.
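As a sketch of that idea, with n_pretrain_steps, G_opt and D_opt as hypothetical names that are not taken from the repo:

```python
# Hypothetical schedule: warm up the generator alone, then alternate G/D updates.
for step in range(n_pretrain_steps):
    # Generator-only warm-up, e.g. on the reconstruction/distortion loss
    sess.run(G_opt, feed_dict={gan.handle: train_handle, gan.training_phase: True})

for step in range(n_train_steps):
    # Alternating schedule; swapping the order of these two updates is the
    # difference you asked about, and in practice both orderings tend to
    # converge to similar results.
    sess.run(G_opt, feed_dict={gan.handle: train_handle, gan.training_phase: True})
    sess.run(D_opt, feed_dict={gan.handle: train_handle, gan.training_phase: True})
```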