khuhm / W-Net

W-Net: Two-Stage U-Net With Misaligned Data for Raw-to-RGB Mapping
MIT License

The training code #1

Open DeepAnonymous opened 4 years ago

DeepAnonymous commented 4 years ago

Hi @khuhm,

Thank you for sharing the code. Could you please also share the training code? I'm interested in re-training with a few tweaks for testing and study purposes.

I'm sure it'd be a great help to the community.

Looking forward to it. Thanks in advance.

khuhm commented 4 years ago

Hi,

Thank you for your interest in W-Net. The training code is quite messy and hard to use, since we use a two-stage learning strategy and a model ensemble with different loss functions.

I will try to share the code after cleaning it up. I guess it will take some time.

DeepAnonymous commented 4 years ago

Thank you for your reply.

Could you please first share the code that computes the loss (the loss that is less sensitive to misalignment of the training data and encourages the network to generate well color-corrected images)? That would be very useful in the meantime.

Thank you in advance.

khuhm commented 4 years ago

Hi, @DeepAnonymous

I used the VGGLoss function in loss.py (which I just uploaded) to handle the misalignment of the training data. I also used PyTorch's CosineSimilarity function (torch.nn) to encourage well color-corrected images, as follows:

```python
import torch
from torch.nn import CosineSimilarity
from torch.nn.functional import interpolate

# Cosine similarity along the channel dimension compares the direction
# (hue) of the RGB vectors rather than their magnitude.
color_criterion = CosineSimilarity(dim=1, eps=1e-6)

# Downsampling both images before comparison makes the loss more robust
# to small misalignments between prediction and target.
hue_sim = color_criterion(interpolate(predicted_RGB_images, scale_factor=0.5),
                          interpolate(target_RGB_images, scale_factor=0.5))

color_loss = 1. - torch.mean(hue_sim)
```

Hope this helps.

DeepAnonymous commented 4 years ago

Yes. Thank you.

I'm still looking forward to the full training code.

I hope you're still working on it. It would be very useful for the community.