Hi,
I have been training the RRDN model on mass spec images. I converted the grayscale images to 'RGB' by stacking the same channel three times, roughly as sketched below.
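A minimal sketch of that conversion (the function name `gray_to_rgb` is just for illustration):

```python
import numpy as np

def gray_to_rgb(gray_img):
    """gray_img: 2D array of shape (H, W); returns (H, W, 3)
    by repeating the single mass-spec channel three times."""
    return np.stack([gray_img, gray_img, gray_img], axis=-1)
```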
However, when I looked at the images predicted from the low-res files, square/grid artifacts appear throughout the generated output. Here are two examples with the corresponding ground truths:
I noticed that in #134 the suggested fix for the grid artifact was the padding_size parameter. But my images are not large: they are (176, 92), (176, 144), (200, 152) and (100, 136). The sketch below shows roughly how I am running prediction. What could be a potential solution to get rid of these artifacts?
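For context, this is roughly how I generate the predictions; the `by_patch_of_size` / `padding_size` arguments in the second call are my understanding of the #134 suggestion, and the weights argument is just a placeholder for my own trained model:

```python
from ISR.models import RRDN

# Placeholder: in my case the model is loaded from my own training run.
model = RRDN(weights='gans')

lr_rgb = gray_to_rgb(lr_gray)   # (H, W, 3), as in the sketch above

# Whole-image prediction (what I currently do, given the small image sizes).
sr_whole = model.predict(lr_rgb)

# The #134 suggestion, as I understand it: patch-wise prediction with padding.
sr_patched = model.predict(lr_rgb, by_patch_of_size=50, padding_size=2)
```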
Thank you! Tim