creotiv / hdrnet-pytorch

Unofficial PyTorch implementation of 'Deep Bilateral Learning for Real-Time Image Enhancement', SIGGRAPH 2017 https://groups.csail.mit.edu/graphics/hdrnet/

Question about the guidance map auxiliary network #6

Closed: yl-precious closed this issue 3 years ago

yl-precious commented 4 years ago

Hello, in this implementation GuideNN is realized by two 1×1 convolutional layers. But in the original paper, they used a 3×3 color transformation matrix and piecewise-linear transfer functions to obtain the guidance map.
I am not clear whether they are the same. Will this impact the performance? If I want to reimplement this, what should I do?

creotiv commented 4 years ago

They also used two 1×1 convs: https://github.com/google/hdrnet/blob/master/hdrnet/models.py#L199. Also, the guidance map is a single-channel image, so I don't know how they would get it from a 3-channel conv operation.
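
For clarity, here is a minimal PyTorch sketch of that kind of guidance net, built from two 1×1 convolutions and ending in a single-channel map; the channel counts and activations are illustrative assumptions, not the exact values from either repo:

```python
import torch
import torch.nn as nn

# Hedged sketch of a guidance network made of two 1x1 (pointwise) convolutions.
# The hidden width and the activations are illustrative assumptions.
class GuideNN(nn.Module):
    def __init__(self, in_channels=3, hidden=16):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, hidden, kernel_size=1)
        self.conv2 = nn.Conv2d(hidden, 1, kernel_size=1)  # single-channel guide

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        return torch.sigmoid(self.conv2(x))  # guide values in [0, 1]

g = GuideNN()(torch.rand(1, 3, 256, 256))  # -> shape (1, 1, 256, 256)
```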

> If I want to reimplement this, what should I do?

git clone ... implement

yl-precious commented 4 years ago

I found that in this repository, https://github.com/mgharbi/hdrnet/blob/78a063200f/hdrnet/models.py#L145-L190, the author gives the source code for computing the guidance map.
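
For reference, that guidance variant (a learned 3×3 color matrix, per-channel piecewise-linear transfer curves, then a learned mix down to one channel) could be sketched in PyTorch roughly as below. This is only a sketch: the parameter shapes, initializations, knot count, and the final sigmoid are my assumptions, not an exact port of the linked TF code.

```python
import torch
import torch.nn as nn

# Hedged PyTorch sketch of the paper-style guidance map: a learned per-pixel
# 3x3 color transform, per-channel piecewise-linear curves (sum of ReLU ramps),
# and a learned 1x1 mix down to a single channel. Shapes/init are illustrative.
class PaperGuide(nn.Module):
    def __init__(self, n_knots=16):
        super().__init__()
        self.ccm = nn.Parameter(torch.eye(3))           # 3x3 color matrix
        self.ccm_bias = nn.Parameter(torch.zeros(3))
        self.shifts = nn.Parameter(                     # curve knots per channel
            torch.linspace(0.0, 1.0, n_knots).repeat(3, 1))
        self.slopes = nn.Parameter(torch.ones(3, n_knots) / n_knots)
        self.mix = nn.Conv2d(3, 1, kernel_size=1)       # combine channels -> 1ch

    def forward(self, x):                               # x: (B, 3, H, W) in [0, 1]
        x = torch.einsum('ij,bjhw->bihw', self.ccm, x) + self.ccm_bias.view(1, 3, 1, 1)
        # piecewise-linear transfer curve: weighted sum of shifted ReLU ramps
        x = (self.slopes.view(1, 3, -1, 1, 1)
             * torch.relu(x.unsqueeze(2) - self.shifts.view(1, 3, -1, 1, 1))).sum(dim=2)
        return torch.sigmoid(self.mix(x))               # (B, 1, H, W) guide
```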

creotiv commented 4 years ago

This is another type of guidance, and it's not very good. That's why I didn't implement it.

yl-precious commented 4 years ago

Okay, thanks for your reply. Recently I have become interested in the style transfer task, and I am confused by two questions. Could you give some advice?

  1. For photorealistic style transfer, are the training content/style pairs usually sampled from their respective datasets? I found that sampling two arbitrary photos as inputs makes the training process hard. Should I split the training samples into different categories (i.e., content and style images are both houses, or both landscapes)? But as far as I can see, many papers don't split the data into categories for training.
  2. Should I set up positive and negative samples? I trained my network with coco_train as the content images and the HDR+ Burst Photography Dataset as the style images, and found that my network can't achieve style transfer. Should I use artistic images as the style images?

FranYi commented 4 years ago

Recommended paper: Joint Bilateral Learning for Real-time Universal Photorealistic Style Transfer, https://arxiv.org/pdf/2004.10955.pdf