DmitryUlyanov / deep-image-prior

Image restoration with neural networks but without learning.
https://dmitryulyanov.github.io/deep_image_prior

Are masks required for successful restoration? #58

Open deeptibhegde opened 5 years ago

deeptibhegde commented 5 years ago

Hello, I have run the super-resolution and inpainting code on the provided examples as well as on my own test cases. Noise that is already present in the image, i.e. not added by the mask, is reproduced in the generated image. I see similar results with the text mask: text that is already present in the image is not removed.

Does this mean the inverse operation only works for masks applied to the image? Doesn't this reduce the potential for real-world application?

I apologize for any misunderstanding, but some clarification would be appreciated. Thank you.

deeptibhegde commented 5 years ago

Edit: "restoration" not "super-resolution"

fengyayuan commented 5 years ago

I couldn't run the "restoration" code. When I run it I get: name 'Concat' is not defined. I don't know why Concat is not defined (it comes from the skip model). Do you know how to fix it? Thank you.

deeptibhegde commented 5 years ago

@fengyayuan I did not face a similar issue. However, did you make sure you have a stable version of PyTorch installed? Were you able to run the other notebooks? Did you face a similar problem with the UNet architecture?
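If it is an import problem, something like the following might fix it. This is a minimal sketch assuming the repo's layout, where Concat is defined in models/common.py and models/skip.py imports it:

```python
# Run from the repository root so the `models` package is importable.
# models/skip.py pulls Concat in from models/common.py, so importing
# skip() this way should bring everything it needs into scope.
from models.skip import skip

net = skip(num_input_channels=32,
           num_output_channels=3,
           num_channels_down=[128] * 5,
           num_channels_up=[128] * 5,
           num_channels_skip=[4] * 5,
           upsample_mode='bilinear',
           need_sigmoid=True,
           pad='reflection')
```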

fengyayuan commented 5 years ago

> @fengyayuan I did not face a similar issue. However, did you make sure you have a stable version of PyTorch installed? Were you able to run the other notebooks? Did you face a similar problem with the UNet architecture?

Thanks for your reply. I can use the UNet in "super-resolution", but I couldn't use skip in "restoration". I'm not sure whether it's an unstable PyTorch install.

AlexanderZhujiageng commented 4 years ago

@d-b-h I found the same issue in the inpainting code. There, the MSE is calculated by comparing out * mask_var with img_var * mask_var. But if we do that, we assume we know the noise/mask, which seems like cheating.
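For reference, the data term is essentially the following. This is a sketch with toy stand-ins for the network and tensors, not the notebook's exact code:

```python
import torch
import torch.nn.functional as F

# Toy stand-ins; the notebook uses the skip network and a real image/mask.
net = torch.nn.Conv2d(32, 3, 3, padding=1)
net_input = torch.randn(1, 32, 8, 8)
img_var = torch.rand(1, 3, 8, 8)                   # corrupted image
mask_var = (torch.rand(1, 1, 8, 8) > 0.3).float()  # 1 = uncorrupted pixel

out = net(net_input)
# The loss only sees pixels where mask_var == 1, so the network is never
# penalized inside the holes -- which is exactly why the mask must be known.
total_loss = F.mse_loss(out * mask_var, img_var * mask_var)
total_loss.backward()
```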

I will try adding some regularization to see whether it can get rid of the need for a known mask. Have you had more tests or ideas since then?

Thanks

deeptibhegde commented 4 years ago

@AlexanderZhujiageng I haven't been successful in getting decent results with unknown masks. I read the paper again, but it is not clear to me whether this limitation is inherent to the method or whether I am just missing something!

Did the regularisation work?

AlexanderZhujiageng commented 4 years ago

@d-b-h The regularization didn't work. I have double-checked the paper. In the paper the inpainting loss function is E(x; x0) = ||(x − x0) ⊙ m||², where m is the binary mask. It assumes we know what the original image is. It seems we would have to train the network on a large dataset, as we usually do, for this to work with unknown masks.
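To spell out how I read the per-image optimization (a toy sketch with an assumed stand-in generator, not the repo's code):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the paper's setup; sizes and the generator are assumed.
H = W = 64
net = nn.Sequential(                        # stand-in for the skip/U-Net generator
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
z = 0.1 * torch.randn(1, 32, H, W)          # fixed random input, never optimized
x0 = torch.rand(1, 3, H, W)                 # the corrupted image
m = (torch.rand(1, 1, H, W) > 0.5).float()  # binary mask, 1 = valid pixel

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(200):
    opt.zero_grad()
    out = net(z)
    # E(x; x0) = ||(f_theta(z) - x0) * m||^2 -- both x0 and m must be given.
    loss = ((out - x0) * m).pow(2).mean()
    loss.backward()
    opt.step()

inpainted = net(z).detach()                 # holes are filled by the network prior
```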

abhishekaich27 commented 4 years ago

I am late to the party, but here are my inputs:

Hope this helps.

lzhengchun commented 3 years ago

@DmitryUlyanov how do you explain the mask in the loss function?

It seems intuitive to me that the mask should be 1 for the missing pixels, because those are the pixels we want to predict, so the loss function should focus on them. For the valid pixels the model does not need to recover anything, so the loss does not need to pay attention to them, and it seems natural to filter them out by multiplying by 0.

I dug into your implementation, and it seems you use the mask in the opposite way from what I described above (the sketch below makes the two readings concrete). I could not find much more detail about this in the paper; could you please explain the logic?
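A toy comparison, with names of my own choosing:

```python
import torch
import torch.nn.functional as F

x0 = torch.rand(1, 3, 8, 8)                         # corrupted image
hole_mask = (torch.rand(1, 1, 8, 8) > 0.7).float()  # 1 where pixels are missing
known_mask = 1.0 - hole_mask                        # 1 where pixels are valid
out = torch.rand(1, 3, 8, 8, requires_grad=True)    # stand-in for the net output

# What I expected: supervise the missing pixels.
loss_expected = F.mse_loss(out * hole_mask, x0 * hole_mask)

# What the code does: supervise only the valid pixels and let the
# network fill in the holes on its own.
loss_in_code = F.mse_loss(out * known_mask, x0 * known_mask)
```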

Thanks

abhishekaich27 commented 3 years ago

> @abhishekaich27 how do you explain the mask in the loss function?
>
> It seems intuitive to me that the mask should be 1 for the missing pixels, because those are the pixels we want to predict, so the loss function should focus on them. For the valid pixels the model does not need to recover anything, so the loss does not need to pay attention to them, and it seems natural to filter them out by multiplying by 0.
>
> I dug into your implementation, and it seems you use the mask in the opposite way from what I described above. I could not find much more detail about this in the paper; could you please explain the logic?

This is not my implementation. The author is @DmitryUlyanov!

lzhengchun commented 3 years ago

> This is not my implementation. The author is @DmitryUlyanov!

Oh, sorry for the wrong mention. Thanks for the response.

crazyn2 commented 2 years ago

After reading the example code, I think inpainting cannot actually be implemented with this project, because the author just operates the U-Net like a GAN generator. In the example, the loss function needs the uncorrupted original image as its target tensor in order to make the network generate a clean image. In the real world we only have the corrupted image and the mask, not the original image, so they are of no use for this procedure. To sum up, I conclude that this project is useless for inpainting.

crazyn2 commented 2 years ago

An unrealistic paper.

Breezewrf commented 1 year ago

> After reading the example code, I think inpainting cannot actually be implemented with this project, because the author just operates the U-Net like a GAN generator. In the example, the loss function needs the uncorrupted original image as its target tensor in order to make the network generate a clean image. In the real world we only have the corrupted image and the mask, not the original image, so they are of no use for this procedure. To sum up, I conclude that this project is useless for inpainting.

Can't agree more!