abria / TeraStitcher

A tool for fast automatic 3D-stitching of teravoxel-sized microscopy images
http://abria.github.io/TeraStitcher/

stitching RGB images works better when source image was intensity inverted #45

Open rmd13 opened 5 years ago

rmd13 commented 5 years ago

[image: 11111] [image: 22222]

I stitched 2 RGB images using TeraStitcher and found that it works better when the source image is intensity-inverted. The images are of chemically stained neurons, so the neurons appear black. At the boundary of the stitched image I found one region that was not stitched very well (lower figure). I guess the stitching uses the brightest features to register, but the neurons here are dark, so in effect the background was used for alignment. To solve the problem, I inverted the source RGB images and redid the stitching, and found that this region was stitched much better (upper figure). I also noticed that the intensity of this part of the neuron is more blurred and lighter than it should be, compared with other neurons in the source images. In addition, a 10-pixel offset on the V axis in the lower figure disappears in the upper figure, and the merged image size also changed from 4686x2026 to 4588x2016. But when I compared the two images, I did not see any loss of pixels or image content. So weird.

iannellog commented 5 years ago

I will try to comment your observations to clarify some points. See below within the text.

Il giorno dom 3 feb 2019 alle ore 17:29 rmd06 notifications@github.com ha scritto:

[image: 11111] https://user-images.githubusercontent.com/22294036/52179234-b1432200-2812-11e9-899a-f793de7cc40f.png [image: 22222] https://user-images.githubusercontent.com/22294036/52179235-b1dbb880-2812-11e9-8b31-a9171a58048b.png

I have 2 RGB images stitched using TeraStitcher, and found that it works better when image was inverted.

Actually, the MIP-NCC alignment algorithm was designed assuming relatively sparse structures in the foreground on a dark background. It is not surprising that in your case inverting the image delivers better results. I will consider including an option so that the image is inverted on the fly just for the alignment phase. Let me know if you consider this a useful additional feature.
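Until such an option exists, the inversion can be done as a preprocessing step outside TeraStitcher. A minimal sketch (not part of TeraStitcher; assumes 8-bit RGB tiles loaded as NumPy arrays):

```python
import numpy as np

def invert_rgb(tile: np.ndarray) -> np.ndarray:
    """Invert an 8-bit image so dark stained neurons become bright foreground."""
    return 255 - tile

# Synthetic example: one dark "neuron" pixel on a light background.
tile = np.full((4, 4, 3), 240, dtype=np.uint8)  # light background
tile[2, 2] = (20, 20, 20)                       # dark neuron pixel
inv = invert_rgb(tile)
# The neuron pixel becomes 235 (bright), the background becomes 15 (dark),
# matching the bright-foreground / dark-background assumption of MIP-NCC.
```

Applying this to every tile before the align step, then stitching the inverted (or, if the bright result is undesired, the original) tiles with the computed displacements, reproduces the workaround described above.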

The image was chemical stained neurons showing black color on neurons.

However, at the boundary of the stitched image I found that this region was the only region that was not stitched very well (lower figure). I guess the stitching uses the brightest features to register, but the neurons here are dark, so in effect the background was used for alignment.

correct

To solve the problem, I inverted the source RGB images and redid the stitching,

and found that this region was stitched much better (upper figure). I also noticed that the intensity of this part of the neuron is more blurred and lighter than it should be, compared with other neurons in the source images.

This is because in the overlapping region we perform a "fusion" between the two images using sinusoidal blending. When the two overlapping regions have small differences, this results in a slightly blurred region. Looking at the image above, it seems that the neuron is imaged slightly differently in the two adjacent tiles, which could explain the blurring. You can also observe in the above image that the neuron coming from the upper tile becomes increasingly evanescent as it approaches the bottom border: this is exactly due to the sinusoidal blending. You could try a different blending algorithm (e.g. NO_BLEND). Look at

https://github.com/abria/TeraStitcher/wiki/User-Interface#--algorithmstring-advanced

for more details.
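To illustrate the effect (the exact weighting TeraStitcher uses may differ; this is a generic raised-cosine sketch): the contribution of one tile fades smoothly to zero toward the border of the overlap, so a structure present in only one tile looks increasingly evanescent there.

```python
import numpy as np

def sin_blend(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Blend two overlapping strips along axis 0 with raised-cosine weights."""
    n = a.shape[0]
    t = np.linspace(0.0, 1.0, n)          # 0 at a's side, 1 at b's side
    w = 0.5 * (1.0 - np.cos(np.pi * t))   # smooth 0 -> 1 transition
    w = w[:, None]                        # broadcast across columns
    return (1.0 - w) * a + w * b

a = np.full((5, 3), 100.0)   # overlap as seen by the upper tile
b = np.full((5, 3), 200.0)   # overlap as seen by the lower tile
out = sin_blend(a, b)
# First row is pure a, last row is pure b, the middle row is the 50/50 mix;
# if a structure appears only in a, its contrast fades toward b's side.
```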

I also noticed that the 10-pixel offset on the V axis in the lower figure also

disappears in the upper figure, and the merged image size also changed from 4686x2026 to 4588x2016. But when I compared the two images, I did not see any loss of pixels or image content. So weird.

The two images have been merged with different displacements. At the borders this can generate small empty regions due to the different alignment between tiles. This also explains the different overall image size, which corresponds to the size of the rectangle that includes all tiles positioned according to the computed pairwise alignments.
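A minimal sketch of why the size changes (tile sizes and offsets below are made up for illustration, not taken from your data): the output is just the bounding rectangle of all tiles placed at their computed positions, so a different V displacement yields a different merged size without any content being lost.

```python
def merged_size(tiles):
    """tiles: list of (v_offset, h_offset, height, width) in pixels."""
    top    = min(v for v, h, hh, ww in tiles)
    left   = min(h for v, h, hh, ww in tiles)
    bottom = max(v + hh for v, h, hh, ww in tiles)
    right  = max(h + ww for v, h, hh, ww in tiles)
    return bottom - top, right - left

# Two 2048x2048 tiles overlapping by 10 pixels horizontally.
aligned = [(0, 0, 2048, 2048), (0, 2038, 2048, 2048)]
# Same tiles, but the second one shifted 10 pixels along V:
shifted = [(0, 0, 2048, 2048), (10, 2038, 2048, 2048)]
print(merged_size(aligned))  # (2048, 4086)
print(merged_size(shifted))  # (2058, 4086) -- taller, yet no pixels lost
```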

If you want to discuss the issue further, please write to me from your personal email address and, if possible, send me the original images (even just the problematic subset).

Best.

--Giulio


Giulio Iannello Preside della Facolta' Dipartimentale di Ingegneria Universita' Campus Bio-Medico di Roma v. Alvaro del Portillo, 21 00128 Roma, Italy

Tel: +39-06-22541-9602 E-mail: g.iannello@unicampus.it Fax: +39-06-22541-9609 URL: https://scholar.google.it/citations?user=L-UJxIgAAAAJ


rmd13 commented 5 years ago

Yes, inverting the image before stitching would be a useful additional feature.

ilykos commented 2 years ago

> I will consider including an option so that the image is inverted on the fly just for the alignment phase. Let me know if you consider this a useful additional feature.


Please consider adding on-the-fly inversion; it would be a useful feature.

Somehow, when I supply --algorithm=NOBLEND to the merge call, it doesn't seem to have any effect. Is it a GUI-only option?