juglab / n2v

This is the implementation of Noise2Void training.

One U-Net per Channel Option #79

Closed tibuch closed 4 years ago

tibuch commented 4 years ago

This PR adds the option to train a U-Net for each channel independently. This is achieved by splitting the input channels and feeding them through independent U-Nets. The last layer concatenates the outputs of the channel-U-Nets. From an API point of view nothing changes, but the number of parameters gets roughly multiplied by the number of channels.
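The split-and-concatenate pattern described above can be sketched as follows. This is a minimal NumPy illustration of the data flow, not the actual PR code; `tiny_unet` is a hypothetical placeholder for a single-channel U-Net.

```python
import numpy as np

def tiny_unet(x):
    # Hypothetical stand-in for one independent single-channel U-Net:
    # any function mapping (batch, H, W, 1) -> (batch, H, W, 1).
    return x * 0.5

def per_channel_forward(batch):
    # Split the multi-channel input into single-channel slices,
    # run each slice through its own "U-Net", then concatenate the
    # outputs along the channel axis.
    channels = np.split(batch, batch.shape[-1], axis=-1)
    outputs = [tiny_unet(c) for c in channels]
    return np.concatenate(outputs, axis=-1)

batch = np.ones((2, 8, 8, 3), dtype=np.float32)
out = per_channel_forward(batch)
print(out.shape)  # (2, 8, 8, 3)
```

Since each channel gets its own network, the parameter count scales with the channel count, as noted above.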

This option is turned on by default! For single-channel images virtually nothing changes. Multi-channel images will require more memory with this change and training will take longer. In the BSD68 reproducibility and the RGB example the option is turned off.

I added a few lines to the notebooks explaining the new parameter.

turekg commented 4 years ago

It all seems straightforward to me. Do you think it's worthwhile to have functional tests with and without this new option? Since we're not able to create a baseline, it would merely make sure that things "work" in both cases.

turekg commented 4 years ago

Sorry @tibuch what exactly do you want me to do with the notebooks?

tibuch commented 4 years ago

Just double check if they run and nothing looks funny.

Thank you!

turekg commented 4 years ago

@tibuch In N2V_DataWrapper, lines 55 and 56 you have

        self.X_Batches = np.zeros((self.X.shape[0], *self.shape, self.n_chan), dtype=np.float32)
        self.Y_Batches = np.zeros((self.Y.shape[0], *self.shape, 2*self.n_chan), dtype=np.float32)

Are those '*' meant to be there in the second argument? Eclipse does not like it much :0(

tibuch commented 4 years ago

self.shape is a tuple and the * unpacks it into individual elements, which are passed as separate arguments.
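A small self-contained example of that unpacking, with made-up values standing in for the wrapper's attributes:

```python
import numpy as np

n_samples = 4
shape = (64, 64)   # patch shape, stored as a tuple
n_chan = 3

# The * operator unpacks the tuple into its elements, so this call
# is equivalent to np.zeros((4, 64, 64, 3), dtype=np.float32).
X_Batches = np.zeros((n_samples, *shape, n_chan), dtype=np.float32)
print(X_Batches.shape)  # (4, 64, 64, 3)
```

This is standard Python 3.5+ iterable unpacking (PEP 448), so any tool flagging it as a syntax error is likely configured for an older Python grammar.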

Did I change this?

turekg commented 4 years ago

No actually... Weird, now it's not being flagged anymore.

tibuch commented 4 years ago

@turekg do you think we can merge?

tibuch commented 4 years ago

Resolves #33.