Justin-Tan / generative-compression

TensorFlow Implementation of Generative Adversarial Networks for Extreme Learned Image Compression
MIT License

How to compare the results between the proposed model and BPG/JPEG? #17

Closed · Haijunlv closed 5 years ago

Haijunlv commented 6 years ago

Hi Justin-Tan, I have three questions. I hope for your reply.

1. When I read the paper, I found that it compares results (visual results at the same bpp) between the proposed model and BPG/JPEG. Do you know how to hit an arbitrary bpp with BPG and JPEG? And how can the PSNR and MS-SSIM measures be computed? (A sketch of the kind of pipeline I mean follows this list.)
2. In your code, `trainer.py` L50 has `test_handle = sess.run(gan.test_iterator.string_handle())` and L62 has `sess.run(gan.test_iterator.initializer, feed_dict=feed_dict_test_init)`. Maybe the test handle is redundant in the training process? I did not find where the test images are actually processed.
3. In your code, the generator is updated first and then the discriminator. I found that pix2pixHD (https://github.com/NVIDIA/pix2pixHD/blob/master/train.py), from the paper "High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs", applies the same update strategy. But updating the discriminator first and then the generator seems more reasonable. What is the difference between the two strategies, or do they give the same training result?

These are my questions; I look forward to your reply. Thank you very much for sharing your code. I have spent a lot of time reading it and have benefited greatly from it.
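For concreteness, here is the kind of comparison pipeline I have in mind for question 1, as a sketch only. I don't know which tools the authors actually used; the file name and bpp target below are placeholders, and since `bpgenc` exposes a quantizer setting rather than a rate target (as far as I know), a similar search would have to be wrapped around it for BPG:

```python
import io

import numpy as np
import tensorflow as tf  # TF 1.x graph mode, matching this repo
from PIL import Image


def jpeg_at_target_bpp(img, target_bpp):
    """Binary-search the JPEG quality setting so the encoded size lands
    as close as possible to `target_bpp` bits per pixel."""
    w, h = img.size
    lo, hi, best = 1, 95, None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=q)
        bpp = 8.0 * buf.tell() / (w * h)
        if best is None or abs(bpp - target_bpp) < abs(best[1] - target_bpp):
            best = (q, bpp, buf.getvalue())
        if bpp < target_bpp:
            lo = q + 1  # too small, raise quality
        else:
            hi = q - 1  # too large, lower quality
    return best  # (quality, achieved_bpp, jpeg_bytes)


original = Image.open('kodim01.png').convert('RGB')  # placeholder test image
quality, bpp, payload = jpeg_at_target_bpp(original, target_bpp=0.2)
decoded = Image.open(io.BytesIO(payload)).convert('RGB')

# PSNR / MS-SSIM via TensorFlow's built-in metrics (TF >= 1.8); MS-SSIM
# needs reasonably large images since it downsamples over 5 scales.
x = tf.convert_to_tensor(np.asarray(original)[None].astype(np.float32))
y = tf.convert_to_tensor(np.asarray(decoded)[None].astype(np.float32))
psnr = tf.image.psnr(x, y, max_val=255.0)
ms_ssim = tf.image.ssim_multiscale(x, y, max_val=255.0)

with tf.Session() as sess:
    p, m = sess.run([psnr, ms_ssim])
print('quality=%d bpp=%.4f PSNR=%.2f dB MS-SSIM=%.4f' % (quality, bpp, p[0], m[0]))
```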
Justin-Tan commented 6 years ago
  1. Unfortunately, I'm not sure how they compressed their images to a given rate with the BPG format. The measures you refer to are only applicable in the case where they have the semantic label maps and are implementing selective compression.
  2. Yes, you are right that the test handle is redundant for now; I intend to compute statistics over a test batch in the future. (The iterator sketch below shows how the handles are meant to be used.)
  3. I would assume that either method yields very similar results. One interesting direction might be pretraining the generator for a certain number of iterations before switching to the alternating update schedule; see the second sketch below.
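On point 2, the feedable-iterator pattern those handles belong to looks roughly like this. This is a minimal sketch with toy datasets, not the code from `trainer.py`:

```python
import tensorflow as tf

# Toy datasets standing in for the real train/test pipelines.
train_ds = tf.data.Dataset.from_tensor_slices(tf.random_uniform([128, 8])).batch(16).repeat()
test_ds = tf.data.Dataset.from_tensor_slices(tf.random_uniform([32, 8])).batch(16)

# One feedable iterator serves both datasets; which one is read is chosen
# at sess.run time by feeding the matching string handle.
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(
    handle, train_ds.output_types, train_ds.output_shapes)
batch = iterator.get_next()

train_iterator = train_ds.make_one_shot_iterator()
test_iterator = test_ds.make_initializable_iterator()

with tf.Session() as sess:
    train_handle = sess.run(train_iterator.string_handle())
    test_handle = sess.run(test_iterator.string_handle())

    train_batch = sess.run(batch, feed_dict={handle: train_handle})

    # Evaluation pass: reinitialize the test iterator, then read via its handle.
    sess.run(test_iterator.initializer)
    test_batch = sess.run(batch, feed_dict={handle: test_handle})
```

On point 3, the two schedules differ only in which update runs first, and a generator warm-up just prepends generator-only steps. A toy sketch follows; the losses, names, and step budgets here are hypothetical stand-ins, not this repository's:

```python
import tensorflow as tf

# Toy stand-ins so the loop runs; in the real model these would be the
# GAN's generator/discriminator losses and variable lists.
g = tf.Variable(0.0)
d = tf.Variable(0.0)
G_loss = tf.square(g - tf.stop_gradient(d) - 1.0)
D_loss = tf.square(d - tf.stop_gradient(g) + 1.0)
G_train_op = tf.train.AdamOptimizer(1e-2).minimize(G_loss, var_list=[g])
D_train_op = tf.train.AdamOptimizer(1e-2).minimize(D_loss, var_list=[d])

n_pretrain_steps, n_steps = 100, 1000  # hypothetical budgets

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Optional warm-up: update only the generator (e.g. on the distortion
    # term alone) before the adversarial game begins.
    for _ in range(n_pretrain_steps):
        sess.run(G_train_op)

    # Alternating schedule; swapping the two sess.run calls gives the
    # discriminator-first variant.
    for _ in range(n_steps):
        sess.run(G_train_op)
        sess.run(D_train_op)
```

Because each update sees the opponent's parameters as they stand at that moment, swapping the order mostly shifts the schedule by half a step, which is consistent with the two orderings behaving similarly in practice.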