Hi @chrislos, thanks for reporting the issue.
Yes, the code actually fixes the input image size at 180x180 regardless of the given input format:
img_width = 180
img_height = 180
And in the code below, the starting image is generated at that resolution:
import tensorflow as tf

def initialize_image():
    # We start from a gray image with some random noise
    img = tf.random.uniform((1, img_width, img_height, 3))
    # ResNet50V2 expects inputs in the range [-1, +1].
    # Here we scale our random inputs to [-0.125, +0.125]
    return (img - 0.5) * 0.25
You can also set the resolution of your choice to experiment with your own model. Thanks!
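For example, here is a minimal sketch of how one could point the script at a different resolution and at a custom model. The model path and layer name below are placeholders, not names from the example, and the [-0.125, +0.125] scaling may need adjusting for a model that does not use ResNet50V2-style preprocessing:

import tensorflow as tf
from tensorflow import keras

# Use the resolution the network was trained on (2048 is just illustrative).
img_width = 2048
img_height = 2048

# Load your own trained model instead of the example's ResNet50V2;
# the file name and layer name here are placeholders.
model = keras.models.load_model("my_pix2pix_generator.keras")
layer = model.get_layer(name="my_layer_x")
feature_extractor = keras.Model(inputs=model.inputs, outputs=layer.output)

def initialize_image():
    # Gray image with some random noise, scaled to roughly [-0.125, +0.125];
    # adjust this scaling if your model expects a different input range.
    img = tf.random.uniform((1, img_width, img_height, 3))
    return (img - 0.5) * 0.25

# The rest of the example (compute_loss, gradient_ascent_step, visualize_filter)
# should work unchanged, since it only depends on feature_extractor and
# initialize_image().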
https://github.com/keras-team/keras-io/blob/313a4d15b4f858de35dac1246bfd83fbd07d1a7a/examples/vision/visualizing_what_convnets_learn.py#L29
Dear Keras team,
First of all, thank you for your exceptional work. I went through fchollet's article about the techniques for visualizing convnets. Thanks to your help I could finally see what my nets are "seeing". Super interesting.
There is one underlying question of understanding left for me.
Given that I've trained a pix2pix net on images of size 2048 x 2048: would I have to set these dimensions as the input image values in your script for a scientifically correct filter visualization of a layer "x" at a filter index "y"?
It seems that smaller image dimensions deliver similar filter outputs (e.g. 1024 x 1024 px or even 512 x 512 px as input dimensions), even though my net was trained on a higher resolution.
Is there a downsampling process hidden somewhere that I haven't found in the code so far, or why do smaller image inputs also result in filters that appear to have learned patterns?
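(For illustration, a minimal sketch, assuming the ResNet50V2 feature extractor and layer name from the linked example: the truncated network is fully convolutional, so it accepts different input resolutions directly, and only the spatial size of the output feature map changes, not the learned filters.)

import tensorflow as tf
from tensorflow import keras

# Feature extractor as in the linked example (layer name taken from that script).
model = keras.applications.ResNet50V2(weights="imagenet", include_top=False)
layer = model.get_layer(name="conv3_block4_out")
feature_extractor = keras.Model(inputs=model.inputs, outputs=layer.output)

# The same extractor accepts different spatial sizes; only the output
# feature-map size changes, the filters stay the same.
for size in (180, 512, 1024):
    img = tf.random.uniform((1, size, size, 3))
    print(size, feature_extractor(img).shape)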
Thanks in advance, Christian