tonzowonzo / SAR_utils

A selection of functions for working with SAR data.
MIT License

Instructions on how to run? #1

Closed by ziudeso 3 years ago

ziudeso commented 3 years ago

Hi, I'm testing your repo and it seems really promising, but I can't get it up and running. Can you provide a more detailed way of testing the images in the /example folder (iceye_speckled), plus a requirements.txt file?

Thanks in advance F.

tonzowonzo commented 3 years ago

Hi,

I've added the requirements.txt file now. I can't attach the Iceye images to the repo due to licensing issues, but I will attach some examples here. The zip file contains 3 examples from 2 different areas, hope this helps!

Thanks, Tim
sar_iceye_examples.zip: https://drive.google.com/file/d/1EvJ2GgqZfZisBhzw0Z5M5FAeKnrhQvaI/view?usp=drive_web

ziudeso commented 3 years ago

Hi, thanks for the quick reply. I've sent an access request to the folder (it's restricted).

Basically I'd love to test despeckle_sar_image.py. In that file you set path = "D:/sar/test3/"; since the images you refer to aren't available in this repo, can you provide instructions for applying despeckle_sar_image.py to examples/iceye/?

Thanks and keep going, this could be the best repo on speckle noise removal out there!

tonzowonzo commented 3 years ago

Hey,

I think to run it on an image in examples/iceye/, all you would need to do is change the path to point to the folder containing the images. So if it's on your C drive, path = "C:/Users/examples/iceye/" should allow it to run. The outputs will also be written to this folder. Also, for Iceye images the scale factor shouldn't be 45; I had that while I was testing Capella imagery that was log scaled.
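For illustration, the change would look something like this at the top of despeckle_sar_image.py (only the path variable is confirmed here; the scale-factor name below is just a placeholder for whatever the script actually uses):

# despeckle_sar_image.py: point the script at the folder holding the Iceye .tif files
path = "C:/Users/examples/iceye/"  # despeckled outputs are written back into this folder

# placeholder name: Iceye data isn't log scaled, so no factor of 45 is needed
scale_factor = 1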

ziudeso commented 3 years ago

Hi! I'm referring to the code that processes .tif images, for i in [i for i in os.listdir(path) if i.endswith("tif")]:, so basically you expect .tif files with values in the range [0, 255]. I guess those tif files are in the zip file you attached =) It would be awesome to include them in the repo for people like me who want to test them. Anyway, one last question: you expect images of size 256x256, I guess. Do you think it's possible to either:

  1. Apply the model to images of arbitrary size?
  2. Alternatively pass patches of size 256x256 to the model by sliding on a larger image?

Thanks a ton @tonzowonzo!

tonzowonzo commented 3 years ago

Yup, right now the images have to be a minimum size of 256x256, but you can go bigger. despeckle_sar_image.py should automatically loop over these bigger images (except for the first 20 pixels at the start, but you could take that out).
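The looping is roughly the sliding-window idea from your second point. A simplified sketch (not the exact code in despeckle_sar_image.py), assuming a 2D image already scaled for the model and a Keras model that takes (1, 256, 256, 1) inputs:

import numpy as np

def despeckle_in_patches(img, model, patch_size=256):
    """Sketch: slide a patch_size x patch_size window over a larger image,
    despeckle each patch with the model and stitch the results back together."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = img[y:y + patch_size, x:x + patch_size]
            pred = model.predict(patch[np.newaxis, ..., np.newaxis])  # shape (1, 256, 256, 1)
            out[y:y + patch_size, x:x + patch_size] = pred.reshape(patch_size, patch_size)
    return out  # pixels beyond the last full patch are left as zeros in this sketch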

ziudeso commented 3 years ago

I have another question regarding the training code: in the paper "SAR Image Despeckling Using a Convolutional Neural Network", the network has components (in particular the division residual skip connection) that I don't see in your model.py.

Can you tell us why your implementation differs from the paper's? Does the network in model.py work the same way, or is it even better?

Also, how is the function total_variation_loss (through custom_loss) employed in your model? I see you load it, but it doesn't look like you actually use it anywhere. Thanks in advance =)
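(For reference, I'd expect something roughly like the usual MSE plus total-variation combination below. This is only a generic sketch, not necessarily how custom_loss in model.py is written:)

import tensorflow as tf

def mse_tv_loss(y_true, y_pred, tv_weight=1e-5):
    """Generic sketch: pixel-wise MSE plus a total-variation penalty."""
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    tv = tf.reduce_mean(tf.image.total_variation(y_pred))  # penalises high-frequency residual noise
    return mse + tv_weight * tv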

F.

tonzowonzo commented 3 years ago

Wow, nicely spotted. It seems I mixed up my papers when I was creating this repo; I kind of combined the two papers. I may try creating a model that calculates the residual noise with the skip connections, and I will add the paper that made use of dilations to the readme.
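(For illustration only, combining the two ideas, dilated convolutions plus a residual-noise skip connection, would look roughly like this in Keras; this is a sketch, not the code in model.py:)

from tensorflow.keras.layers import Conv2D, Input, Subtract
from tensorflow.keras.models import Model

def dilated_residual_sketch(input_size=(256, 256, 1)):
    """Sketch: dilated convolutions estimate the noise, and a skip
    connection subtracts that estimate from the noisy input."""
    inp = Input(input_size)
    x = Conv2D(64, 3, dilation_rate=1, activation="relu", padding="same")(inp)
    x = Conv2D(64, 3, dilation_rate=2, activation="relu", padding="same")(x)
    x = Conv2D(64, 3, dilation_rate=4, activation="relu", padding="same")(x)
    noise = Conv2D(1, 3, padding="same")(x)  # estimated residual noise
    out = Subtract()([inp, noise])           # skip connection: clean = noisy - noise
    return Model(inputs=inp, outputs=out)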

As for TV, you're right: I forgot to change it back from mean squared error! I'll retrain the model tonight and upload it.

Thanks, Tim

ziudeso commented 3 years ago

That would be great for full reproducibility! I'm looking forward to seeing how you implement it, Tim, especially the division residual layer! Thanks a ton, F.

ziudeso commented 3 years ago

Hi Tim, just a few other questions:

  • noise_model_noise_synthetic.h5 -> works well; which of the two models did you use to train it?
  • noise_model_noise_synthetic_sv_loss.h5 -> gives ValueError: Unknown loss function: total_variation_loss; which of the two models did you use to train it?
  • noise_model_synthetic_mse_loss_sar_drn.h5 -> gives:

    X in: (1, 256, 256, 1) pred out: (1, 256, 256, 64)
    Traceback (most recent call last):
      File "despeckle_sar_image.py", line 59, in
        pred = pred.reshape((img_size, img_size))
    ValueError: cannot reshape array of size 4194304 into shape (256,256)

Can you elaborate? Best, F.

tonzowonzo commented 3 years ago

Oops, it looks like I made a mistake in the final layer of that model; I'll change it and retrain tonight.

With the models, after you load them in you can run print(model.summary()) and you should be able to see which model it is. I think that one is the dilation network without skip connections.
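For example, something like this (the filename is just one of the h5 files you listed; compile=False skips the loss so the file loads even without custom_objects):

from tensorflow.keras.models import load_model

m = load_model("noise_model_noise_synthetic.h5", compile=False)  # compile=False avoids the unknown-loss error
m.summary()  # the layer listing shows the dilation rates and whether skip connections are present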

tonzowonzo commented 3 years ago

As for the loss function, are you loading the model like this: model_cnn = load_model("/path/to/h5/model/to/load.h5", custom_objects={"total_variation_loss": custom_loss})?

You also need to import the custom_loss function from model, where it is defined. If it still doesn't work, let me know; it may be another issue.
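Putting it together, something like this should work (the path is illustrative; custom_loss is the function defined in model.py):

from tensorflow.keras.models import load_model
from model import custom_loss  # the loss wrapper defined in model.py

model_cnn = load_model(
    "noise_model_noise_synthetic_sv_loss.h5",  # path is illustrative
    custom_objects={"total_variation_loss": custom_loss},
)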

ziudeso commented 3 years ago

Hi Tim, thanks for the support and for updating the README. The new model looks good ;) I was trying to implement ID-CNN by following in your footsteps; however, I get an MSE loss that is nan. You'll find the ID-CNN model below:

from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, Activation, Lambda
from tensorflow.keras.models import Model


def id_cnn(input_size=(256, 256, 1)):
    """ID-CNN-style despeckling network: a stack of 3x3 conv + batch norm
    blocks, a division residual skip connection and a tanh output."""
    input = Input(input_size)
    conv1 = Conv2D(64, 3, activation="relu", padding="same")(input)

    conv2 = Conv2D(64, 3, activation="relu", padding="same")(conv1)
    bn2 = BatchNormalization()(conv2)

    conv3 = Conv2D(64, 3, activation="relu", padding="same")(bn2)
    bn3 = BatchNormalization()(conv3)

    conv4 = Conv2D(64, 3, activation="relu", padding="same")(bn3)
    bn4 = BatchNormalization()(conv4)

    conv5 = Conv2D(64, 3, activation="relu", padding="same")(bn4)
    bn5 = BatchNormalization()(conv5)

    conv6 = Conv2D(64, 3, activation="relu", padding="same")(bn5)
    bn6 = BatchNormalization()(conv6)

    conv7 = Conv2D(64, 3, activation="relu", padding="same")(bn6)
    bn7 = BatchNormalization()(conv7)

    # single-channel noise estimate
    conv8 = Conv2D(1, 3, activation="relu", padding="same")(bn7)

    # division residual skip connection: divide the noisy input by the estimated noise
    div_skip = Lambda(lambda x: x[0] / x[1])([input, conv8])
    output = Activation(activation="tanh")(div_skip)

    model = Model(inputs=[input], outputs=[output])

    model.compile(optimizer="adam", loss="mse", metrics=["mae", "mse"])
    print(model.summary())

    return model

Do you notice anything weird? Can you try training it and see if you get real (non-nan) mse and mae values?

Oh, by the way, lots of images in the dataset (the one from IEEE) aren't speckled, so I guess we'd really enjoy you adding a "speckler" function here =) Thanks!!!
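(Something along these lines is what I have in mind; a rough sketch of multiplicative speckle using unit-mean gamma noise, where the number of looks is just an example value:)

import numpy as np

def add_speckle(img, looks=4, seed=None):
    """Sketch of a 'speckler': multiply a clean intensity image by
    gamma-distributed noise with unit mean (shape=looks, scale=1/looks).
    Fewer looks means stronger speckle; 4 is just an example value."""
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=img.shape)
    return np.clip(img.astype(np.float32) * noise, 0, 255)  # keep the [0, 255] range used above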

Thanks a lot =) F.

ziudeso commented 3 years ago

Nvm about the nan: just use div_skip = Lambda(lambda x: tf.math.divide_no_nan(x[0], x[1]))([input, conv8]) (with import tensorflow as tf at the top).

Still waiting for your "speckler" =)

I'll open a pull request with code for processing images whose dimensions aren't multiples of 256! Thanks, F.