allenai / satlas-super-resolution

Apache License 2.0
190 stars 24 forks

Inference #5

Closed oguzhannysr closed 7 months ago

oguzhannysr commented 7 months ago

Hello, I am trying to use this pre-trained model https://pub-956f3eb0f5974f37b9228e0a62f449bf.r2.dev/satlas_explorer_datasets/super_resolution_models/esrgan_orig_12S2.pth but I am getting the following error.

[screenshot of the error]

My input image is like this;

[screenshot of the input image]

This is my yml file: infer_example.yml.txt

piperwolters commented 7 months ago

It looks like you need to change the "num_in_ch" argument under "network_g" to 36 instead of 24, and make sure your RGB input contains 12 Sentinel-2 images in the format described in the README.
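For reference, the channel count is tied to the number of input images: num_in_ch = 3 RGB channels x number of Sentinel-2 images. A sketch of the relevant "network_g" section of the yml (field names other than num_in_ch are illustrative here; check them against the repo's example configs):

```yaml
# network_g section of the inference config.
network_g:
  type: RRDBNet      # illustrative; use the generator type from the repo's example configs
  num_in_ch: 36      # 12 Sentinel-2 images x 3 (RGB) channels
  num_out_ch: 3      # single RGB super-resolved output
```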

oguzhannysr commented 7 months ago

I didn't understand the 12 Sentinel-2 images part, and I couldn't see it in the README either.

piperwolters commented 7 months ago

The required format of the input Sentinel-2 data is described here.

If you have 12 Sentinel-2 images, with a shape of [12, h, w, 3], then you will need to reshape it to [12*h, w, 3] and save it as a png.

oguzhannysr commented 7 months ago

I understand, but can't I make an inference with a single image? Do I need to have at least 12 images?

piperwolters commented 7 months ago

Currently, we have only uploaded weights for models that expect 2, 6, 12, or 18 Sentinel-2 images. I can upload weights for a single-image model in the next few weeks.

oguzhannysr commented 7 months ago

@piperwolters ,"If you have 12 Sentinel-2 images, with a shape of [12, h, w, 3], then you will need to reshape it to [12*h, w, 3] and save it as a png." Is there any way or helpful code on how to do this?

Is what I did right? I did not divide by 255. Two Sentinel-2 images merged: [attached merged_image]

piperwolters commented 7 months ago

Example code to do the reshaping:

```python
import skimage.io
import numpy as np

# numpy array containing 12 Sentinel-2 images of size 32x32x3;
# cast to uint8 so imsave writes a normal 8-bit png.
arr = (np.random.rand(12, 32, 32, 3) * 255).astype(np.uint8)

# Reshape so the 12 images are stacked vertically, then save as a png.
reshape_arr = np.reshape(arr, (12 * 32, 32, 3))
skimage.io.imsave('testing.png', reshape_arr)
```

The Sentinel-2 time series saved in this format will look like the following example image, though this example has more than 12 Sentinel-2 images in the time series: tci
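Going the other way, reading a stacked png back into separate images is just the inverse reshape. A minimal sketch, assuming the same 12-image, 32x32 layout as the example above (a zero array stands in for the loaded png):

```python
import numpy as np

# Stacked image as saved above: 12 Sentinel-2 chips concatenated vertically.
# In practice this would come from skimage.io.imread('testing.png').
stacked = np.zeros((12 * 32, 32, 3), dtype=np.uint8)

# Recover the individual [12, 32, 32, 3] time series.
series = stacked.reshape(12, 32, 32, 3)

# Round-trip check: restacking reproduces the original array.
assert np.array_equal(series.reshape(12 * 32, 32, 3), stacked)
```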

oguzhannysr commented 7 months ago

Thank you for your efforts. I prepared the input data, but the error I mentioned at the beginning persists even though I edited the yml file.

yml file: infer_grid_example.yml.txt

Sample input data: [attached image slitchimage_0]

Error: [screenshot]

oguzhannysr commented 7 months ago

Thank you, I solved the problem; setting n_lr_images: 2 fixed it. But while running the infer_grid.py file, I got an error on one of the files near the end.

[screenshot of the error]

oguzhannysr commented 7 months ago

I solved this too; the only remaining problem is the stitching process. I get an error when combining these. Also, the image needs to be georeferenced.

piperwolters commented 7 months ago

Glad to hear you solved those issues. To be clear, are you dividing your 2000x2000 image into 32x32 chunks before running inference?

If so - the infer_grid.py file has a few hard-coded values that you will likely need to change in this section of code. By default, it is expecting to stitch a 16x16 grid of both 32x32 Sentinel-2 images and 128x128 Super-Resolved images. If you have somehow divided your 2000x2000 image into overlapping 32x32 chunks, then this stitch code will not work since it expects non-overlapping, perfectly aligned chunks. If you reshaped your 2000x2000 image to be divisible by 32 before chunking it into 32x32 chunks, then you may just need to change the hard-coded input args to the stitch() function.

Alternatively, you could try dividing your 2000x2000 image into 125x125 chunks (2000 / 16 = 125). Then run the inference script on that. The stitching should work then. That would not perfectly match how the models were trained, but I've found small scale differences to be ok.
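The non-overlapping chunk-and-stitch logic above can be sketched with numpy, using the default sizes mentioned (a 16x16 grid of 32x32 inputs, each super-resolved 4x to 128x128). This is illustrative, not the repo's exact stitch() code:

```python
import numpy as np

grid, lr, sr = 16, 32, 128  # 16x16 grid; 32x32 inputs upsample 4x to 128x128

# Split an image into a 16x16 grid of non-overlapping 32x32 chunks, row-major.
image = np.zeros((grid * lr, grid * lr, 3), dtype=np.uint8)
chunks = [image[r*lr:(r+1)*lr, c*lr:(c+1)*lr]
          for r in range(grid) for c in range(grid)]

# After inference each chunk becomes sr x sr; stitch them back in the same
# row-major order (zero arrays stand in for the model outputs here).
outputs = [np.zeros((sr, sr, 3), dtype=np.uint8) for _ in chunks]
stitched = np.zeros((grid * sr, grid * sr, 3), dtype=np.uint8)
for i, out in enumerate(outputs):
    r, c = divmod(i, grid)
    stitched[r*sr:(r+1)*sr, c*sr:(c+1)*sr] = out
```

The key assumption is that the chunks tile the image exactly, with no overlap, which is why an input whose dimensions are not a multiple of the chunk size must be padded or resized first.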

oguzhannysr commented 7 months ago

@piperwolters, Yes, first I combine my two Sentinel-2 images into a time series as you showed, divide them into 32x32 pieces, and run the model via the infer_grid.py file. I made some changes and got 128x128 SR images. In some places the code did not work, so I assigned black pixels there. I will show you these results below.

But the problem is not the merging of these 128x128 images, but the georeferencing. I wrote code for this but I need to improve it, and I haven't decided what to do yet. Can you help me with this? How can I do affine transformations?

Code for georeferencing:

STICH.py.txt

Result: [attached stitched image]

The two Sentinel-2 inputs: https://drive.google.com/file/d/1cSQEPYYX08BELVyZYgrcpAcpPxyyfOui/view?usp=sharing and https://drive.google.com/file/d/1QKuEr2YeZDvbo0ovDUBO8MxWiQ8pS6W3/view?usp=sharing

piperwolters commented 7 months ago

The input Sentinel-2 image and stitched output look good aside from the black chunks (do you have the error / reason why those did not work?).

oguzhannysr commented 7 months ago

@piperwolters, The reason for the black pixels is that, when dividing the image into 32x32 pieces, some parts are not 32x32 in size. I'm also looking forward to your help with georeferencing. My last question: I want to run the model on a single image. Is there any progress on the single-image weights you mentioned above?

favyen2 commented 7 months ago
  1. Not 32x32 in size -- you can pad the input image to be a multiple of 32 in both dimensions, then crop the resulting output to 4 times the original (pre-padding) width and height.
  2. Georeferencing -- it makes sense to copy the transform and CRS from the same GeoTIFF that you are using for the input, but you need to scale the resolution by 4. With rasterio, the transform is an Affine object like this:
Affine(x_res, 0, x_off, 0, y_res, y_off)

So you can divide x_res and y_res by 4 (they are in units like meters/pixel; the input is 10 m/pixel and the output is 2.5 m/pixel).

If this doesn't help then please clarify what issue you are having with your georeferencing code right now.
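The two suggestions above can be sketched together: pad the input to a multiple of 32, crop the output back to 4x the original size, and scale the geotransform. For illustration this uses a 500x500 image (the same logic applies to the 2000x2000 case), and writes the Affine handling as plain tuple math so it does not depend on rasterio being installed; with rasterio you would scale the dataset's transform coefficients the same way. The offset and resolution values are made-up examples.

```python
import numpy as np

def pad_to_multiple(img, m=32):
    """Pad height and width up to the next multiple of m (edge padding)."""
    h, w = img.shape[:2]
    ph, pw = (-h) % m, (-w) % m
    return np.pad(img, ((0, ph), (0, pw), (0, 0)), mode="edge")

h, w = 500, 500
padded = pad_to_multiple(np.zeros((h, w, 3), dtype=np.uint8))

# ...run inference on `padded`; a zero array stands in for the 4x output...
output = np.zeros((padded.shape[0] * 4, padded.shape[1] * 4, 3), dtype=np.uint8)

# Crop back to 4x the original (pre-padding) size.
output = output[: 4 * h, : 4 * w]

# Scale the geotransform: same origin, pixel size divided by 4
# (rasterio coefficient order: a=x_res, b=0, c=x_off, d=0, e=y_res, f=y_off).
x_res, x_off, y_res, y_off = 10.0, 500000.0, -10.0, 4000000.0  # example values
sr_transform = (x_res / 4, 0.0, x_off, 0.0, y_res / 4, y_off)
```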