LynnHo / AttGAN-Tensorflow

AttGAN: Facial Attribute Editing by Only Changing What You Want (IEEE TIP 2019)
MIT License

Hi, I have a question about the training and the test #39

Closed FriedRonaldo closed 4 years ago

FriedRonaldo commented 4 years ago

First, I appreciate your excellent work and have been interested in your work since 2018.

I have a question about the test and training in your work. To clarify up front: I am considering the case where the attribute values are binary.

For training, the attribute values seem to be -1 or 1: labels are read as 0 or 1, then mapped via `* 2 - 1` to {-1, 1}. (https://github.com/LynnHo/AttGAN-Tensorflow/blob/master/train.py#L161)

On the other hand, the attribute range is {-2, 2} at test time: labels are read as 0 or 1, mapped via `* 2 - 1` to {-1, 1}, and finally scaled by `* 2` to {-2, 2}. (https://github.com/LynnHo/AttGAN-Tensorflow/blob/master/train.py#L246, test_int = 2.0)
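The two mappings described above can be sketched as follows (a minimal illustration with NumPy, not the repo's actual code; the function names are hypothetical):

```python
import numpy as np

def to_train_attr(labels):
    """Training: map binary labels {0, 1} to {-1, 1} via labels * 2 - 1."""
    return labels * 2 - 1

def to_test_attr(labels, test_int=2.0):
    """Test: map {0, 1} to {-1, 1}, then scale by test_int to {-2, 2}."""
    return (labels * 2 - 1) * test_int

labels = np.array([0, 1, 1, 0])
print(to_train_attr(labels))  # -> [-1  1  1 -1]
print(to_test_attr(labels))   # -> [-2.  2.  2. -2.]
```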

Is it correct that you use different attribute-vector values in training and at test time?

I found that I cannot reproduce the attribute-classification results without this trick, but I can reproduce them by using {-2, 2}.

Thanks!

CastellanLiu commented 4 years ago

It's a trade-off between image quality and attribute generation accuracy.

FriedRonaldo commented 4 years ago

@CastellanLiu Thanks for the reply. You mean that this code uses the trick for quantitative evaluation, right? Despite your kind reply, I would still like to confirm it with the author.


I just noticed that you might be the author of STGAN :). You must know this repo well. Thanks, I understand now, and it helps.

LynnHo commented 4 years ago

@FriedRonaldo Yes, we use the intensity of {-2, 2} with the 64x64 model for all quantitative evaluations.

As @CastellanLiu said, it's a trade-off between accuracy and generation quality. That is, a higher intensity at test time brings higher attribute accuracy but lower visual quality, so a balanced value should be chosen.
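One way to look for that balanced value is to sweep candidate intensities and inspect the resulting attribute vectors (in practice, comparing attribute accuracy against visual quality at each setting). A minimal sketch, assuming binary labels; the function name is hypothetical:

```python
import numpy as np

def scale_attrs(labels, intensity):
    """Map binary labels {0, 1} to {-1, 1}, then scale by the test intensity."""
    return (np.asarray(labels) * 2 - 1) * intensity

target = [1, 0, 1]
for intensity in (1.0, 1.5, 2.0):
    # intensity=1.0 matches training; 2.0 is the paper's quantitative setting
    print(intensity, scale_attrs(target, intensity).tolist())
```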

FriedRonaldo commented 4 years ago

@LynnHo Thanks! This work will be a good baseline for my future work. I appreciate the excellent paper!