vislearn / analyzing_inverse_problems

Code for the paper "Analyzing inverse problems with invertible neural networks." (2018)

Paper "Analyzing Inverse Problems with Invertible Neural Networks": Error in implementation #4

Open renatobellotti opened 4 years ago

renatobellotti commented 4 years ago

Hi,

I'm trying to implement the invertible network described in the paper in TensorFlow 2, and I'm having trouble matching the descriptions of the loss functions with the code.

In particular, I think there might be an inconsistency in this file:

If I've understood correctly, the function loss_reconstruction (which is barely described in the paper) seems to use the following layout for the values fed to the sampling process:

[Screenshot from 2019-12-06 12-24-45]

However, the train_epoch function seems to use a different layout:

[Screenshot from 2019-12-06 12-25-05]

Is this a mistake, or does the output of the forward process really have a different format than the input of the inverse process?
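To make the suspected mismatch concrete, here is a minimal sketch of the problem. All names and dimensions below are made up for illustration (this is not the repo's actual code): the forward output is assumed to be a concatenation of a latent part z, zero padding, and the label part y, and the inverse side is assumed to slice it in a different order.

```python
# Hypothetical sketch: if the forward pass packs the vector as [z | pad | y]
# but the inverse/sampling side unpacks it as [z | y | pad], the slices no
# longer line up and y silently picks up padding values.

DIM_Z, DIM_PAD, DIM_Y = 2, 3, 1  # illustrative dimensions only

def pack_forward(z, pad, y):
    # layout one part of the code appears to use: [z | pad | y]
    return z + pad + y

def unpack_inverse(vec):
    # layout the other part appears to assume: [z | y | pad]
    z = vec[:DIM_Z]
    y = vec[DIM_Z:DIM_Z + DIM_Y]
    pad = vec[DIM_Z + DIM_Y:]
    return z, pad, y

z, pad, y = [0.1, 0.2], [0.0, 0.0, 0.0], [7.0]
vec = pack_forward(z, pad, y)
z2, pad2, y2 = unpack_inverse(vec)
print(y2 == y)  # False: the y slice picked up padding instead of the label
```

If the two layouts really do differ in the code, this is exactly the kind of silent error that would still train but sample from the wrong conditioning values.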

ardizzone commented 4 years ago

Hi,

thank you for raising the issue! I'm sorry to say I'm on holiday for the rest of the year, and I won't be able to check in more detail until January.

Until then, if in doubt, stick with the description in the paper.

renatobellotti commented 4 years ago

Ok, thanks for your answer, enjoy your holidays! :)

renatobellotti commented 4 years ago

I've now managed to write a TensorFlow implementation of the invertible network. It reproduces the results of the toy-8 example.

While implementing it, I discovered a few things that others might benefit from when working with this paper:

Apart from these suggestions: great work! This will be valuable for many problems in science.

P. S.: The problem from my first post is not present in the toy-8 demo notebook, but in the inverse_problems_in_science folder that I linked in my original post.
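One thing that helped me catch layout mismatches like this was a round-trip sanity check: run the forward pass, feed the output straight back through the inverse pass, and assert you recover the input. A toy version in plain Python (a made-up RealNVP-style coupling pair, not the paper's or the repo's actual blocks):

```python
import math

def coupling_forward(x1, x2):
    # toy affine coupling: x2 is scaled and shifted conditioned on x1
    s, t = math.tanh(x1), 0.5 * x1
    return x1, x2 * math.exp(s) + t

def coupling_inverse(y1, y2):
    # exact inverse: undo the shift, then the scale, using the same s, t
    s, t = math.tanh(y1), 0.5 * y1
    return y1, (y2 - t) * math.exp(-s)

x1, x2 = 0.3, -1.2
y1, y2 = coupling_forward(x1, x2)
r1, r2 = coupling_inverse(y1, y2)
print(abs(r1 - x1) < 1e-9 and abs(r2 - x2) < 1e-9)  # True: round trip recovers the input
```

The same check on the full network (including any padding and reordering between forward output and inverse input) immediately flags a swapped layout, because the reconstruction error stops being near machine precision.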

krishnarld commented 2 years ago

> Is this a mistake, or does the output of the forward process really have a different format than the input of the inverse process?

Hi,

I have the same confusion. Have you figured it out? Please let me know. I could really use your help. Thanks.