Justin-Tan / generative-compression

TensorFlow Implementation of Generative Adversarial Networks for Extreme Learned Image Compression
MIT License

The test set can never be reconstructed #38

Closed: imsleepy711 closed this issue 3 years ago

imsleepy711 commented 3 years ago

Hi Justin, thanks for your code. Here is my question: I trained on a different training set of about 1,400 images, with batch size 1 and 400 epochs, which came to roughly 210,000 steps. The generator loss stabilized around 40, so I stopped training and ran the test. Most of the training set reconstructs well, but most of the test set does not, like the attached photo. I retrained a few times and got the same result: the test set cannot be reconstructed. Do you have any advice? Thank you! buildings06_compressed
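A quick way to confirm whether this is overfitting (as suggested in the reply below) is to measure reconstruction quality, e.g. PSNR, on both splits and compare. The following is a minimal sketch in TF 1.x, not code from this repo; `reconstruct` is a hypothetical wrapper around the trained generator.

```python
# Minimal sketch (not part of this repo): compare reconstruction PSNR on the
# training and test splits to quantify the generalization gap.
import numpy as np
import tensorflow as tf  # TF 1.x, as used by this repo

def mean_psnr(originals, reconstructions):
    """Mean PSNR over a batch of uint8 images with shape [N, H, W, 3]."""
    a = tf.image.convert_image_dtype(tf.constant(originals), tf.float32)
    b = tf.image.convert_image_dtype(tf.constant(reconstructions), tf.float32)
    psnr_op = tf.image.psnr(a, b, max_val=1.0)  # images scaled to [0, 1]
    with tf.Session() as sess:
        return float(np.mean(sess.run(psnr_op)))

# `reconstruct` is a hypothetical function wrapping the trained generator.
# train_psnr = mean_psnr(train_images, reconstruct(train_images))
# test_psnr  = mean_psnr(test_images,  reconstruct(test_images))
# A gap of several dB between train_psnr and test_psnr points to overfitting
# rather than a problem with the test images themselves.
```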

Justin-Tan commented 3 years ago

The training set is probably too small - you can either try finetuning from a model pretrained on a larger dataset, or use the approach in this newer repo: https://github.com/Justin-Tan/high-fidelity-generative-compression - finetuning from the pretrained models there should work well. Note that the bitrate of the 'high-fidelity' models is generally higher than that of the models in this repo, but the reconstruction quality is better.
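If you go the finetuning route, the general pattern in TF 1.x is to build the model graph, restore the pretrained checkpoint, and then continue optimization, typically with a reduced learning rate so the pretrained weights are not destroyed early on. The sketch below is only an illustration of that pattern: `build_train_graph` and the checkpoint paths are placeholders, not this repo's actual API.

```python
# Minimal finetuning sketch in TF 1.x. `build_train_graph` and the checkpoint
# paths are hypothetical stand-ins for this repo's model construction/weights.
import tensorflow as tf

def finetune(build_train_graph, pretrained_ckpt, n_steps=5000, log_every=500):
    train_op, loss = build_train_graph()        # constructs the full model graph
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.restore(sess, pretrained_ckpt)    # start from pretrained weights
        for step in range(n_steps):
            _, loss_val = sess.run([train_op, loss])
            if step % log_every == 0:
                print('step {}: loss {:.3f}'.format(step, loss_val))
        saver.save(sess, pretrained_ckpt + '_finetuned')

# Example usage (hypothetical checkpoint path):
# finetune(build_train_graph, 'checkpoints/pretrained/model.ckpt')
```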

imsleepy711 commented 3 years ago

Thank you! I will try your advice.