juglab / n2v

This is the implementation of Noise2Void training.

Understanding a Poor Result #97

Closed JohnCraigPublic closed 3 years ago

JohnCraigPublic commented 4 years ago

I am trying N2V on black-and-white radiographs. These are 16-bit-per-pixel images. I apply N2V to the raw image from the sensor, and then apply our usual image processing that enhances edges and contrast. But the resulting image that had N2V processing shows "bands" -- reminiscent of how an 8-bit image might look on this kind of data. So I checked all the file input and output used to train N2V: I verified that the input is 16-bit-per-pixel, that the output of applying N2V is also 16-bit-per-pixel, and that there are no 'missing codes' -- so it appears to be true 16-bpp data. Perhaps, then, the 'bands' are not due to a loss of pixel gray depth but to something else. I'm assuming the N2V model uses floating-point numbers throughout, so as long as I am careful about feeding it good data, and careful about converting its output back into 16-bpp data, this should not be the source of the problem.

Do you have other ideas about the poor result? Should I try StructN2V in case I have structured noise? If it were structured noise, though, I would have expected N2V simply to be less effective, not to introduce these band artifacts.

Thanks if you have any ideas for me. I'm really impressed with your N2V work and hoped it might be a useful tool in my domain, which is not microscopy. I am attaching crops from one example image, without and with N2V processing.

[Original Image] [N2V Image]

tibuch commented 4 years ago

Hi @JohnCraigPublic,

N2V converts the images to float32 and all computations are performed in float32. So precision should not be a problem.
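For what it's worth, here is a minimal sketch of the kind of careful 16-bit round-trip I mean (untested; it assumes numpy and tifffile, and the filenames are placeholders):

```python
import numpy as np
import tifffile

# Load a 16-bit radiograph and hand it to N2V as float32.
raw_u16 = tifffile.imread("radiograph.tif")      # dtype uint16
raw_f32 = raw_u16.astype(np.float32)             # N2V computes in float32 anyway

# ... run N2V here, e.g. pred = model.predict(raw_f32, axes='YX') ...
pred = raw_f32                                   # placeholder for the denoised result

# Convert back to 16 bit carefully: round and clip before casting so
# out-of-range predictions do not wrap around when cast to uint16.
pred_u16 = np.clip(np.rint(pred), 0, 65535).astype(np.uint16)
tifffile.imwrite("radiograph_n2v.tif", pred_u16)
```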

I am not entirely sure which 'bands' you are referring to. The long diagonal ones? Following the shape of the tissue? Or some other ones which might be gone because of the jpg compression?

In Figure 6 of our paper we describe how N2V amplifies structured noise in images. One solution would be to subtract a dark image from the raw image to reduce the structured noise artifacts and then apply N2V. A second option would be to try out StructN2V, which addresses this problem by adapting the shape of the blind-spot.
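As an illustration of the first option, dark-frame subtraction could look roughly like this (a sketch, assuming the dark image is stored as a TIFF of the same size; filenames are placeholders):

```python
import numpy as np
import tifffile

raw = tifffile.imread("radiograph_raw.tif").astype(np.float32)
dark = tifffile.imread("dark_frame.tif").astype(np.float32)

# Remove the fixed-pattern (structured) component before training/prediction,
# clipping at zero so detector offsets do not produce negative intensities.
corrected = np.clip(raw - dark, 0, None)

tifffile.imwrite("radiograph_darksub.tif", corrected)
```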

You could also check the FT of your images and see if you can identify some peaks, which correspond to a repeating pattern. Both in the raw and denoised data.
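That check could be as simple as the following sketch (assuming 2D single-channel images; identifying the peaks is still done by eye):

```python
import numpy as np
import matplotlib.pyplot as plt
import tifffile

def log_power_spectrum(img):
    """Centered log-magnitude spectrum of a 2D image."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float32)))
    return np.log1p(np.abs(f))

raw = tifffile.imread("radiograph_raw.tif")
den = tifffile.imread("radiograph_n2v.tif")

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].imshow(log_power_spectrum(raw), cmap="gray")
axes[0].set_title("raw")
axes[1].imshow(log_power_spectrum(den), cmap="gray")
axes[1].set_title("N2V denoised")
plt.show()

# Isolated off-center peaks correspond to repeating (structured) patterns;
# compare whether N2V attenuates or amplifies them.
```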

fjug commented 4 years ago

Hi @JohnCraigPublic,

the bands you’re referring to look quite interesting indeed (I also assume you mean the diagonal plateau-like patterns). On what data did you train? How much of it? Is it public (could we have a look)? I’m not 100% sure yet why these patterns are emphasized so strongly...

Best, Florian

JohnCraigPublic commented 4 years ago

Thank you for your responses. Yes, I am referring to the long band-like artifacts that run at roughly a 45-degree angle across the image.

In our system a dark image is always subtracted from the raw image.

I trained on 60 images, and they are fairly large, typically 2500 x 2000 pixels. All images were from the same imaging hardware, with the same dark-image subtraction pre-processing. I wouldn't want to post the database publicly, but I could get it to you if you have the resources and interest to spend a little time on this.

One question I have: my training images are realistic actual-use radiographs of animals. As such, there are regions of bone, regions of soft tissue of varying density, and, perhaps importantly, often a 'collimator shadow' - a lighter portion of the radiograph near one or more edges of the image where little to no radiation hit the detector. I don't know whether these varying image regions cause difficulty for N2V for some reason. Let's say the great hope for N2V would be the "soft tissue" regions - then perhaps I should put together a set of training images composed only of those sub-regions of realistic images, so N2V would 'learn' the image statistics of the important portion of the image. But then, at run time (inference time), what would it do to the 'other regions' of the image, which it never saw examples of during training?

I suppose I might try StructN2V if you think that is the next thing to try...

citypalmtree commented 3 years ago

@JohnCraigPublic , hi

Have you resolved your issue with StructN2V? I get this kind of result on certain images that are more saturated than the others.

How did you come about solving your issue?

JohnCraigPublic commented 3 years ago

@citypalmtree - I did not solve the issue.

I have now tried StructN2V and the result was even worse, so I'm not sure what the issue is.

citypalmtree commented 3 years ago

@JohnCraigPublic Right, that was my case as well. I tried StructN2V because our images have a structured pattern of noise, but the outcome wasn't as good as regular N2V.

tibuch commented 3 years ago

Hi @JohnCraigPublic,

Could it be that these diagonal stripes are the 'collimator shadow'? What would be an experiment which could confirm or reject this hypothesis?
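One such experiment, sketched here under the assumption that you have the raw and the denoised version of the same image (the collimator mask below is a crude guess and would need tuning): compute the residual (denoised minus raw) and check whether the stripes originate inside or outside the shadow region.

```python
import numpy as np
import tifffile

raw = tifffile.imread("radiograph_raw.tif").astype(np.float32)
den = tifffile.imread("radiograph_n2v.tif").astype(np.float32)

# What N2V actually changed in the image.
residual = den - raw

# Crude collimator mask: treats the brightest 1% of pixels as the shadow.
# Flip the comparison if the shadow is dark in your raw data.
collimator = raw > np.percentile(raw, 99)

print("residual std inside shadow :", residual[collimator].std())
print("residual std outside shadow:", residual[~collimator].std())
tifffile.imwrite("n2v_residual.tif", residual)
```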

Your idea of only training on soft tissue makes sense if you are only interested in the soft tissue. For the other regions in the image N2V would certainly predict something (which might even look good), but it should not be used for any further downstream processing since it was not trained to denoise such regions.
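If you go down that road, training on soft-tissue crops only could look roughly like this (a sketch; the crop coordinates and filenames are hypothetical, and the patch-generation call follows the pattern from the n2v example notebooks, so please double-check it against your installed version):

```python
import numpy as np
import tifffile
from n2v.internals.N2V_DataGenerator import N2V_DataGenerator

# Hypothetical hand-picked soft-tissue regions per image: (y0, y1, x0, x1).
rois = {
    "img_001.tif": (400, 1400, 600, 1800),
    "img_002.tif": (300, 1200, 500, 1700),
}

crops = []
for fname, (y0, y1, x0, x1) in rois.items():
    img = tifffile.imread(fname).astype(np.float32)
    # N2V_DataGenerator expects arrays of shape (S, Y, X, C).
    crops.append(img[y0:y1, x0:x1][np.newaxis, ..., np.newaxis])

datagen = N2V_DataGenerator()
patches = datagen.generate_patches_from_list(crops, shape=(96, 96))
print(patches.shape)  # (n_patches, 96, 96, 1), ready for the usual N2V training setup
```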

Would it be possible to only provide the raw data of the already posted image? Maybe we see something which we can't see in the jpg image.

tibuch commented 3 years ago

I will close this issue for now. Please feel free to reopen it if necessary.