cszn / FFDNet

FFDNet: Toward a Fast and Flexible Solution for CNN based Image Denoising (TIP, 2018)
https://ieeexplore.ieee.org/abstract/document/8365806/

dataset generation: is each training sample generated with a "uniform map"? #3

Closed. AceCoooool closed this issue 6 years ago.

AceCoooool commented 6 years ago

Is each training sample generated with a uniform noise map? (Or is some of the training data generated with a spatially variant noise map during training?)

thank you :smile:

cszn commented 6 years ago

I used a uniform noise map.
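For readers implementing the training part themselves, here is a minimal NumPy sketch of what a spatially uniform noise map means in data generation: one sigma is drawn per patch, and the map is constant over the patch. This is an illustration, not the repository's MATLAB/MatConvNet code; the sigma range and the /255 scaling (patches assumed in [0, 1]) are assumptions.

```python
import numpy as np

def make_training_triple(clean_patch, sigma_min=0.0, sigma_max=75.0, rng=None):
    """Build one (noisy patch, noise map, clean patch) training triple.

    The noise map is *uniform*: a single sigma is drawn for the whole patch,
    so every pixel of the map holds the same value. Illustrative sketch only.
    """
    rng = rng or np.random.default_rng()
    sigma = rng.uniform(sigma_min, sigma_max)            # one level per patch
    noise = rng.normal(0.0, sigma / 255.0, clean_patch.shape)
    noisy = clean_patch + noise                          # AWGN corruption
    noise_map = np.full(clean_patch.shape[:2], sigma / 255.0)  # spatially constant
    return noisy.astype(np.float32), noise_map.astype(np.float32), clean_patch
```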

AceCoooool commented 6 years ago

Thank you kai :+1:

Sorry to ask another question (I want to implement the training part). Which loss function did you use: (y_pred - y)^2 / noise_level or [(y_pred - y) / noise_level]^2? (In my earlier experiments with DnCNN-B, the first one gave results similar to those in your paper.)

cszn commented 6 years ago

I think pixel-level loss functions such as L1 and L2 can give the same final results. I used this loss function.
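To make the two options in the question concrete, here is a small sketch of both weighted variants next to a plain (unweighted) L2 loss, which the reply above suggests is already sufficient. The function name and mean reduction are illustrative; sigma may be a scalar or a per-pixel map and is assumed to be strictly positive.

```python
import numpy as np

def loss_variants(y_pred, y, sigma):
    """Compare the two weighted losses from the question with plain L2."""
    err = y_pred - y
    l2_plain    = np.mean(err ** 2)               # ordinary MSE
    l2_div_sig  = np.mean(err ** 2 / sigma)       # (y_pred - y)^2 / noise_level
    l2_div_sig2 = np.mean((err / sigma) ** 2)     # [(y_pred - y) / noise_level]^2
    return l2_plain, l2_div_sig, l2_div_sig2
```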

AceCoooool commented 6 years ago

Thank you. However, if the batch size is small, there is considerable oscillation in the loss (because the noise levels differ a lot from batch to batch). With a large batch size the problem goes away. Thank you again~ :smile:
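One possible mitigation (an assumption on my part, not something from the authors' training code): draw an independent sigma for every patch in the mini-batch, so each batch averages the loss over a mix of noise levels and the per-step loss varies less from batch to batch, even at small batch sizes.

```python
import numpy as np

def batch_sigmas(batch_size, sigma_min=0.0, sigma_max=75.0, rng=None):
    """Draw one independent sigma per patch in a mini-batch.

    Averaging the loss over a mix of noise levels inside every batch reduces
    the batch-to-batch loss oscillation described above. Illustrative sketch.
    """
    rng = rng or np.random.default_rng()
    return rng.uniform(sigma_min, sigma_max, size=batch_size)
```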