Hi, I have a question about using perceptual loss: what kind of input data does the function VGG19_slim accept?
elif FLAGS.perceptual_mode == 'VGG22':
    with tf.name_scope('vgg19_1') as scope:
        extracted_feature_gen = VGG19_slim(gen_output, FLAGS.perceptual_mode, reuse=False, scope=scope)
    with tf.name_scope('vgg19_2') as scope:
        extracted_feature_target = VGG19_slim(targets, FLAGS.perceptual_mode, reuse=True, scope=scope)
In this part of the code, should I normalize gen_output and targets to [-1, 1], [0, 1], or [0, 255]? The code only tells me to resize the input to 224×224, but it doesn't specify the expected range of the input values, and the normalization seems important.
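For context, my understanding (an assumption worth verifying against the actual checkpoint, not something stated in this repo's code) is that the TF-slim VGG checkpoints follow the original Caffe convention: RGB inputs in [0, 255] with the ImageNet per-channel mean subtracted. If that holds, converting from a generator output range such as [-1, 1] would look roughly like this sketch (the function name is illustrative):

```python
import numpy as np

# ImageNet per-channel RGB means used by the original VGG training
# (assumed convention for the TF-slim VGG checkpoints as well).
VGG_MEAN_RGB = np.array([123.68, 116.779, 103.939], dtype=np.float32)

def preprocess_for_vgg(img, input_range=(-1.0, 1.0)):
    """Map an image from `input_range` to the mean-subtracted [0, 255]
    representation that (assumption) slim's VGG expects."""
    lo, hi = input_range
    img = (img - lo) / (hi - lo)   # rescale to [0, 1]
    img = img * 255.0              # rescale to [0, 255]
    return img - VGG_MEAN_RGB      # subtract per-channel mean

# Example: a mid-gray 224x224 image in [-1, 1]
x = np.zeros((224, 224, 3), dtype=np.float32)
y = preprocess_for_vgg(x)
# mid-gray (0.0 in [-1, 1]) maps to 127.5 before mean subtraction
```

Is something like this the preprocessing the code assumes, or does it expect a different range?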
Looking forward to your response, thanks a lot!