Open Multiboxer opened 7 years ago
The model only processes images at the fineSize dimensions, so if you pass in a bigger image it will be scaled down and potentially cropped.
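To make the scale-then-crop behaviour concrete, here is a minimal sketch using NumPy arrays as stand-ins for images. The `fine_size` default and the nearest-neighbour resize are illustrative assumptions, not the repo's actual preprocessing code:

```python
import numpy as np

def resize_nn(img, new_h, new_w):
    """Nearest-neighbour resize of an HxW(xC) array."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows][:, cols]

def to_fine_size(img, fine_size=256):
    """Scale the shorter side to fine_size, then centre-crop to a
    fine_size x fine_size square; everything outside the crop window
    is simply thrown away."""
    h, w = img.shape[:2]
    scale = fine_size / min(h, w)
    img = resize_nn(img, round(h * scale), round(w * scale))
    h, w = img.shape[:2]
    top, left = (h - fine_size) // 2, (w - fine_size) // 2
    return img[top:top + fine_size, left:left + fine_size]

big = np.zeros((768, 1024, 3), dtype=np.uint8)  # stand-in for a larger photo
print(to_fine_size(big).shape)  # (256, 256, 3)
```

Note that the 1024-wide input loses its left and right edges entirely: the crop keeps only the central 256 columns of the 341-column resized image.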
To my knowledge there is no perfect solution. The optimum would be to train your model with a fineSize big enough to fit your original image, but unless you have a lot of GPU memory that is not always feasible.
You can also increase the fineSize on a pre-trained model, but that will often give you different results and not capture the larger image structures as well (it usually works OK on textures).
A third option is to process your image in fineSize-sized tiles, but this will also give different results if the model was trained only on scaled-down full-size images, and adjacent tiles will often differ in color depending on their individual content.
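The tiling idea can be sketched as below. The "model" here is a toy per-tile normalisation (a hypothetical stand-in, not the real network), chosen to show why independently processed tiles can come back with mismatched colors:

```python
import numpy as np

def process_in_tiles(img, model, tile=256):
    """Run `model` independently on each tile x tile patch and stitch
    the outputs back into a full-size result."""
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y + tile, x:x + tile] = model(img[y:y + tile, x:x + tile])
    return out

# Toy "model": shifts each tile by its own mean. Tiles with different
# content drift apart, which is what produces visible seams in practice.
toy_model = lambda patch: patch - patch.mean()

img = np.zeros((512, 512))
img[:, 256:] = 100.0  # right half is much brighter than the left
result = process_in_tiles(img, toy_model)
print(result[0, 0], result[0, 511])  # 0.0 0.0
```

Because each tile only sees its own statistics, the global brightness difference between the two halves is erased: both output halves end up at the same level, the tile-level analogue of the color mismatches described above.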
How can I modify the pre-trained model, and which file do I edit? For example, I want to use the model on a human-face dataset.
I'm attempting to create a model that makes small adjustments to pictures, but test.lua produces extremely distorted, cropped, or pixelated images. Is there a way to use a process similar to test.lua that outputs at the same resolution as the input image?