creotiv / hdrnet-pytorch

Unofficial PyTorch implementation of 'Deep Bilateral Learning for Real-Time Image Enhancement', SIGGRAPH 2017 https://groups.csail.mit.edu/graphics/hdrnet/

How many epochs are needed to get a good trained model? #22

Open alexliyang opened 2 years ago

alexliyang commented 2 years ago

I use the FiveK dataset, resize the JPGs to 480p, and use expertC as the target, but after 20 epochs the test results are not good.

How many epochs did you train before you got a good model?

creotiv commented 2 years ago

Can you share the image output that you have?


alexliyang commented 2 years ago

The network blocked the image upload :(

The training data I used is prepared like that for training a 3DLUT model: 480p JPG files as the input, and expertC's edits as the ground truth. I then train the HDRNet model with this repo's code.

Is this approach right? I will try to upload the images later.
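For reference, the 480p preprocessing described above can be sketched roughly like this (directory names are hypothetical; this resizes the shorter side to 480 px, preserving aspect ratio):

```python
from pathlib import Path
from PIL import Image

def resize_to_480p(src_dir, dst_dir, short_side=480):
    """Resize every JPG in src_dir so its shorter side is `short_side` px."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        img = Image.open(path)
        w, h = img.size
        scale = short_side / min(w, h)
        # LANCZOS gives good quality for downscaling photos
        img = img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
        img.save(dst / path.name, quality=95)
```

The same preprocessing must be applied identically to the input JPGs and the expertC targets, or the pairs will be misaligned.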

alexliyang commented 2 years ago

Here is the ground truth: a0001_t

And the inference output is:

out

I think they differ too much. How can I improve the trained model?


creotiv commented 2 years ago

The problem here is that you will never get exactly the same result as the ground truth, because of the variance in editing, even from the same author.

I also have problems with this implementation, because training somehow differs from the original. Many people have tried to understand why. We first thought the problem was in the grid_sample function, but then found that it works correctly. So there are still many open questions here.
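For context, the grid_sample question above is about the "slicing" step of HDRNet: trilinearly sampling per-pixel affine coefficients from the low-resolution bilateral grid using the guide map as the third coordinate. A minimal sketch of how that slice can be expressed with `torch.nn.functional.grid_sample` (shapes and the 12-channel affine layout are assumptions based on the paper, not this repo's exact code):

```python
import torch
import torch.nn.functional as F

def slice_bilateral_grid(grid, guide):
    """Trilinearly sample per-pixel coefficients from a bilateral grid.

    grid:  (N, C, D, Hg, Wg) learned coefficient grid
           (C = 12 for a 3x4 per-pixel affine color transform)
    guide: (N, H, W) guidance map in [0, 1], used as the depth coordinate
    returns: (N, C, H, W) per-pixel coefficients at full resolution
    """
    N, C, D, Hg, Wg = grid.shape
    H, W = guide.shape[1:]
    # Normalized sampling coordinates in [-1, 1]; grid_sample expects (x, y, z)
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
    )
    xs = xs.expand(N, H, W)
    ys = ys.expand(N, H, W)
    zs = guide * 2 - 1  # map guide [0, 1] -> [-1, 1]
    coords = torch.stack([xs, ys, zs], dim=-1).unsqueeze(1)  # (N, 1, H, W, 3)
    # For 5D input, mode="bilinear" performs trilinear interpolation
    sliced = F.grid_sample(grid, coords, mode="bilinear", align_corners=True)
    return sliced.squeeze(2)  # (N, C, 1, H, W) -> (N, C, H, W)
```

One subtle difference from the original CUDA `bilateral_slice` op is boundary handling (`align_corners` and padding), which is a plausible source of small training discrepancies even when both ops are "correct".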

creotiv commented 2 years ago

I have a smaller test dataset for such models; you can try it: https://www.kaggle.com/anikishaev/photo-retouch-sm-ds But the variance there is much bigger.

creotiv commented 2 years ago

I've added the bilateral_slice op from the original repo, compiled for JIT. But it still has some problems with optimization for some reason. So I think grid_sample was working correctly.

Varato commented 2 years ago

I just noticed another thing from the original paper.

Where the paper states the PSNR is 28.8, they are talking about the HDR+ task. In the HDR+ task, they did NOT use the expert-retouched photos as the ground truth. Instead, they used HDR+ ("a complex hand-engineered photographic pipeline that includes color correction, auto-exposure, dehazing, and tone-mapping") to process the raw photos in FiveK and produce the ground truth for training. On the other hand, for the "Learning from human annotations" task, they do not report PSNR at all, only the L*a*b* color error.

So my guess is that the HDR+ pipeline outputs much more consistently styled photos than human retouching does. Maybe we shouldn't expect training on FiveK with expert_* as GT to hit a similarly high PSNR.
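PSNR is a direct function of per-pixel MSE, which is why inconsistent ground-truth styles cap the achievable number: any irreducible style variance shows up as MSE the model cannot remove. A minimal sketch of the metric, assuming images normalized to [0, 1]:

```python
import torch

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)
```

For example, a uniform per-pixel error of 0.1 already limits PSNR to 20 dB, well below the paper's 28.8 dB on the HDR+ task.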

I am still trying to verify this theory (by using some ISP pipeline to generate consistent GTs).

creotiv commented 2 years ago

I've tested both networks, the original and mine, on the same datasets. The original works; mine, for some reason, does not. You don't need HDR+ for this task; it works pretty well on simple photos with editor retouches.
