Hi team, I am currently using the pix2pix model for image translation. My inputs are 333 × 333 pixels, but the prediction output from the pix2pix notebook implementation is 256 × 256. How can I retain the original dimensions in my output? Thank you!