BorealisAI / noise_flow

Noise Flow: Noise Modeling with Conditional Normalizing Flows

The details of model #10

Open StonERMax opened 3 years ago

StonERMax commented 3 years ago

The training input is the clean image and the noise. Training runs along the forward direction, "SDN -> A4 -> Gain -> A4", as in Figure 3 of the paper, while all layers use the inverse calculation (the train_multithread function in the code).

The sampling input is the clean image together with Gaussian samples. Sampling runs along the inverse direction (the reversed model), while all layers use the forward calculation (the sample_multithread function in the code).

I wonder if my understanding above is correct. Why does the model operate in the forward direction while using the inverse calculation?
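For concreteness, here is a minimal NumPy sketch of how I read the two directions. The toy layer, its parameters, and the likelihood computation are made up for illustration only and are not the repository's code:

```python
import numpy as np

class ToyAffineLayer:
    """Toy signal-dependent affine layer: 'forward' maps base -> noise,
    'inverse' maps noise -> base (names chosen for illustration only)."""

    def __init__(self, a=0.1, b=0.01):
        self.a, self.b = a, b  # toy signal-dependent scale parameters

    def scale(self, clean):
        # conditioning on the clean image, as in a conditional flow
        return np.sqrt(self.a * clean + self.b)

    def forward(self, z, clean):
        # sampling direction: base sample -> noise
        return z * self.scale(clean)

    def inverse_and_log_det_jacobian(self, n, clean):
        # training direction: real noise -> base, plus log|det J| of the inverse
        s = self.scale(clean)
        return n / s, -np.sum(np.log(s))

layer = ToyAffineLayer()
clean = np.full((4, 4), 100.0)                     # toy clean image
real_noise = np.random.randn(4, 4) * layer.scale(clean)

# training-style evaluation: push real noise back to the base distribution
# and accumulate the log-likelihood via the change of variables
z, log_det = layer.inverse_and_log_det_jacobian(real_noise, clean)
log_prob = -0.5 * np.sum(z**2 + np.log(2 * np.pi)) + log_det  # standard-normal base
print("training NLL:", -log_prob)

# sampling-style evaluation: draw from the base and push forward to noise
sampled_noise = layer.forward(np.random.randn(4, 4), clean)
```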

AbdoKamel commented 3 years ago

Hi,

This is just a convention. You may simply flip the figure and call the training direction "inverse" and the sampling direction "forward" and nothing would change, that is, the internal operation in the layers would not change. Hope this helps!

StonERMax commented 3 years ago

The training process maps from the noise distribution (the latent space, like z in Glow) to the noisy image (the data space, like x in Glow). The goal of this work is ultimately to obtain the noise distribution (which corresponds to inference in Glow). Therefore, the inverse direction is used for training here, which is different from Glow, where training maps the data space to the latent space. I think my understanding may match yours; if anything is wrong, please let me know!
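In other words, if the flow is written as n = f(z | clean) with z drawn from a standard Gaussian, the training objective is the likelihood of the real noise n, which has to be evaluated through the inverse mapping. This is just the usual change-of-variables formula, not something specific to this repository:

```latex
\log p\big(n \mid I_{\text{clean}}\big)
  = \log p_z\!\Big(f^{-1}(n \mid I_{\text{clean}})\Big)
  + \log\left|\det \frac{\partial f^{-1}(n \mid I_{\text{clean}})}{\partial n}\right|
```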

StonERMax commented 3 years ago

What's more, is there a PyTorch implementation of Noise Flow?

AbdoKamel commented 3 years ago

Just to clarify, in the Noise Flow paper, Figure 3, and in the code:

AbdoKamel commented 3 years ago

What's more, is there a PyTorch implementation of Noise Flow?

Not currently; I hope we can do it in the future.

StonERMax commented 3 years ago

In Glow, the training direction (data space distribution --> normal distribution, i.e., the "training" in your comment) uses the forward calculation, whereas the Noise Flow code uses the inverse calculation for training (for example, the _inverse_and_log_det_jacobian function of each layer). I think that is the difference, and I am asking why the inverse calculation is used instead of the forward one.
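For comparison, TensorFlow Probability's bijector API uses the same convention as the Noise Flow code: log_prob (the training objective) calls the bijector's inverse and inverse_log_det_jacobian, while sample calls forward. A minimal sketch, assuming a recent TensorFlow Probability install and using Exp purely as a stand-in for the actual Noise Flow layers:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd, tfb = tfp.distributions, tfp.bijectors

# "forward" is the sampling direction (base -> data); log_prob, used for
# maximum-likelihood training, calls inverse() and inverse_log_det_jacobian().
flow = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.Exp(),  # stand-in for a stack of flow layers
)

x = tf.constant([0.5, 1.0, 2.0])
print(flow.log_prob(x))   # evaluates bijector.inverse(x) + its log-det Jacobian
print(flow.sample(3))     # evaluates bijector.forward(base.sample())
```

So the direction called "forward" for sampling is exactly the one whose inverse is evaluated when maximizing the likelihood of real noise, which is consistent with the naming in the code.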