zhouzhiyang-666 opened this issue 2 years ago
The pretrained model provided by the author takes input pictures of 400 × 400 pixels, so if you don't want to modify the network structure and train separately for your picture size, and just want to use the pretrained model, you need to resize your pictures. tf.image.resize_image_with_crop_or_pad or other image-processing functions can be used to resize the image.
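For example, a minimal TF 1.x sketch (the file name here is hypothetical, not from the repository) that pads or crops an arbitrary photo to the 400 × 400 input the released checkpoint expects:

```python
import tensorflow as tf

# Hypothetical file name; any photo of any size works.
raw = tf.read_file('my_photo.png')
img = tf.image.decode_png(raw, channels=3)

# Center-crop or zero-pad to the 400 x 400 size the checkpoint expects.
fitted = tf.image.resize_image_with_crop_or_pad(img, 400, 400)

with tf.Session() as sess:
    out = sess.run(fitted)
    print(out.shape)  # (400, 400, 3)
```

Note that crop-or-pad does not rescale content: a 500 × 500 picture loses a 50-pixel border on each side, which may or may not be acceptable for your use case.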
I want the encoded image to have the same pixel dimensions as the image before encoding. I only changed encode_image.py, modifying the width and height to 500.
It throws the following error:
Traceback (most recent call last):
File "encode_image.py", line 94, in
ValueError: Cannot feed value of shape (1, 200, 200, 3) for Tensor 'input_hide:0', which has shape '(?, 400, 400, 3)'
Changing encode_image.py alone won't work. The structure of the pre-trained model provided by the author means that your input CAN ONLY BE 400 × 400. If you want the input and output size to be 500 × 500, you need to re-train a model, which requires changing the code in models.py and train.py.
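For illustration, a hedged sketch of the kind of change retraining would involve. The actual variable names and size-dependent layers in models.py and train.py may differ; the placeholder name `input_hide` is taken from the error message above:

```python
import tensorflow as tf

HEIGHT, WIDTH = 500, 500  # the released checkpoint was built with 400, 400

# The cover-image placeholder named in the ValueError. Any layer whose
# weight shapes depend on the spatial size (e.g. fully connected layers)
# would also have to be rebuilt, and the model retrained from scratch.
input_hide = tf.placeholder(tf.float32, [None, HEIGHT, WIDTH, 3],
                            name='input_hide')
```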
A simpler method is to write a function that resizes the picture to 400 × 400 before it is fed into the model, and resizes it back to 500 × 500 after output; see the sketch below. However, since the author did not add this scaling process (interpolation and downsampling) to the simulated noise during training, I guess the decoding accuracy after scaling will be impacted more or less.
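A minimal sketch of this workaround, assuming a stand-in `encode_fn` that runs the pretrained 400 × 400 encoder on a uint8 array and returns a uint8 array (it is not a function from this repository):

```python
import numpy as np
from PIL import Image

def encode_at_500(image_500, encode_fn):
    """image_500: 500 x 500 x 3 uint8 array; returns a 500 x 500 encoded image."""
    # Downsample to the size the pretrained model accepts.
    small = np.asarray(Image.fromarray(image_500).resize((400, 400),
                                                         Image.BILINEAR))
    encoded = encode_fn(small)  # runs the 400 x 400 encoder
    # Upsample back to the original size. This interpolation step was never
    # part of the training-time noise, so decoding accuracy may drop.
    return np.asarray(Image.fromarray(encoded).resize((500, 500),
                                                      Image.BILINEAR))
```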
Thank you for your reply! Now I am trying to scale the images losslessly.