kabeero closed this issue 1 year ago
That's a strange error from GIMP... Can you run the tiffinfo command on the image and paste the output?
I just installed gimp and tried it on two different images, and it appears to load okay into GIMP.
Regarding the longer run-time: unfortunately, creating a TIFF in a scalable manner required using the Bio-Formats library, which appears to be very slow. I'm looking into parallelizing the blending to see if that can improve the run-time.
Another option would be to create C bindings and work directly with a low-level library like libtiff.
The old method of exporting relied on Java's BufferedImage object. This has a serious limitation: the backing array cannot hold more than 2^31 - 1 (2,147,483,647) elements. So width * height essentially had to stay under ~2.14 G elements (~2.14 GB at 8 bits per element), which is why it was failing before.
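To put numbers on that limit, here is a quick sanity check (the wafer dimensions are taken from the `tiffinfo` output later in this thread; the arithmetic is mine):

```python
# Java arrays are indexed with a signed 32-bit int, so a BufferedImage's
# backing buffer can hold at most 2**31 - 1 elements (~2.14 G).
JAVA_MAX_ARRAY = 2**31 - 1

width, height, channels = 142767, 142329, 3  # wafer dimensions from tiffinfo
elements = width * height * channels

print(f"elements needed: {elements:,}")        # ~61 billion
print(f"Java array cap:  {JAVA_MAX_ARRAY:,}")  # ~2.1 billion
print(f"over the cap by ~{elements // JAVA_MAX_ARRAY}x")
```

So even a grayscale version of this wafer exceeds the cap many times over; no amount of JVM heap can work around an array-index limit.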
For now it's probably better to use the Python code to assemble the image until we find a more efficient mechanism to save it.
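For anyone following along, the tile-assembly idea can be sketched in a few lines. This is a toy, pure-Python illustration with made-up names and tiny tiles, not the actual script (which streams tiles into a BigTIFF rather than holding the whole mosaic in memory):

```python
def assemble(tiles, tile_h, tile_w):
    """Paste a grid of equally sized tiles into one mosaic.

    tiles: dict mapping (row, col) -> 2D list of pixel values.
    Toy version for illustration only; real data would use numpy arrays.
    """
    rows = 1 + max(r for r, _ in tiles)
    cols = 1 + max(c for _, c in tiles)
    mosaic = [[0] * (cols * tile_w) for _ in range(rows * tile_h)]
    for (r, c), tile in tiles.items():
        for y in range(tile_h):
            for x in range(tile_w):
                mosaic[r * tile_h + y][c * tile_w + x] = tile[y][x]
    return mosaic

# 2x3 grid of 2x2 tiles, each filled with a distinct value
tiles = {(r, c): [[10 * r + c] * 2 for _ in range(2)]
         for r in range(2) for c in range(3)}
mosaic = assemble(tiles, 2, 2)  # 4x6 mosaic
```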
Also, if you have a sample image that you could share with me, I could run some tests to see what is going on. I think I know what the issue is with the image, so I have a few more tests to run.
Thanks for the tip, I hadn't heard of `tiffinfo`. It does appear that `mediainfo` can't decipher it, but `tiffinfo` shows correct properties for the output file:
```
> mediainfo wafer-py.tiff
General
Complete name : wafer-py.tiff
File size     : 14.5 GiB

> tiffinfo wafer-py.tiff
TIFF Directory at offset 0x10 (16)
  Image Width: 142767 Image Length: 142329
  Tile Width: 1024 Tile Length: 1024
  Resolution: 1, 1 (unitless)
  Bits/Sample: 8
  Compression Scheme: AdobeDeflate
  Photometric Interpretation: RGB color
  Samples/Pixel: 3
  Planar Configuration: single image plane
  ImageDescription: {"shape": [142329, 142767, 3]}
  Software: tifffile.py
```
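One plausible explanation for `mediainfo` drawing a blank: the file was written with `bigtiff=True`, and BigTIFF uses version magic 43 in its header where classic TIFF uses 42, which many tools don't parse. A minimal header check using only the standard library (the function name is my own invention):

```python
import struct

def tiff_flavor(header: bytes) -> str:
    """Classify the first 4 bytes of a TIFF file.

    Bytes 0-1: b'II' (little-endian) or b'MM' (big-endian).
    Bytes 2-3: version 42 for classic TIFF, 43 for BigTIFF.
    """
    order = {b'II': '<', b'MM': '>'}.get(header[:2])
    if order is None:
        return 'not a TIFF'
    (version,) = struct.unpack(order + 'H', header[2:4])
    return {42: 'classic TIFF', 43: 'BigTIFF'}.get(version, 'unknown TIFF version')

# Synthetic headers for illustration; on a real file use open(path, 'rb').read(4)
print(tiff_flavor(b'II\x2b\x00'))  # little-endian BigTIFF -> 'BigTIFF'
print(tiff_flavor(b'II\x2a\x00'))  # little-endian classic -> 'classic TIFF'
```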
> Unfortunately, creating a TIFF in a scalable manner required using the Bio-Formats library, which appears to be very slow.

Makes sense; the Bio-Formats plugin also gave me considerably longer reading / writing times when I tried it with the regular ImageJ stitching.

> I'm looking into parallelizing the blending to see if that can improve the run-time.

No worries, this `skimage.io.imsave` is a great stop-gap for now. I am using a macro, so I plan to do what I can in ImageJ and then send the rest to Python / scikit-image.

`skimage.io.imsave` also supports compression quite nicely with TIFFs:
```python
import skimage.io
import tifffile

skimage.io.imsave(output_filepath, stitched_img,
                  plugin="tifffile", tile=(1024, 1024), check_contrast=False,
                  bigtiff=True,
                  compression=tifffile.TIFF.COMPRESSION.ADOBE_DEFLATE)  # requires the 'imagecodecs' package
```
| Compression | File Size | Comments |
|---|---|---|
| None | 3853 MB | :-1: |
| JPEG_2000_LOSSY | 801 MB | GIMP: Corrupt image |
| JPEG_2000 | 801 MB | GIMP: Corrupt image |
| JPEGXL | 709 MB | GIMP: Corrupt image |
| ADOBE_DEFLATE | 989 MB | :+1: |
| DEFLATE | 989 MB | :+1: |
| ZSTD | 1022 MB | :+1: |
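A side note on the identical DEFLATE / ADOBE_DEFLATE sizes: both store the same zlib stream and differ only in the TIFF `Compression` tag value (32946 for old-style DEFLATE, 8 for ADOBE_DEFLATE), so matching file sizes are expected. For a rough feel for deflate on smooth image-like data, a standard-library sketch (the gradient tile is synthetic; real micrograph tiles are noisier and compress worse):

```python
import zlib

# Fake "image" tile: a smooth horizontal gradient, 1024x1024 grayscale bytes.
tile = bytes(x % 256 for x in range(1024)) * 1024

packed = zlib.compress(tile, level=6)  # the deflate stream TIFF stores per tile
print(f"raw: {len(tile):,} B  deflated: {len(packed):,} B  "
      f"ratio: {len(tile) / len(packed):.0f}x")
assert zlib.decompress(packed) == tile  # lossless round-trip
```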
> final output shows all black in GIMP and seems to have many more layers than before (1x previously, and now 16x in my case)

`plugin=None` may not be as smart as the `imsave` docstring says it is... I bet `plugin="tifffile"` made that problem go away. GIMP shows the "Corrupt image" entries as all black and gives a brief corrupt message in the progress bar; I may have missed it yesterday.
I'll see what I can do about the photos, thanks for your time looking into this
Sounds great! I did make some progress yesterday getting the RGB writer functional. I had a bug in there.
Did some preliminary profiling and there is definitely some room for parallelization during blending, so hoping to see some strong scaling when that effort is added in.
@kabeero
A new version was just pushed to Fiji. Give that version a try to write your tif. It resolved some issues with saving RGB and a performance bug that was excessively reading the original field of views multiple times. This version should be faster than the prior and produce a correct RGB image.
I'm going to look into compression next to see what we can do on that front as well. I'd imagine some slow-down when adding that in.
Also, if you see the message "Insufficient memory to hold all image tiles in memory, turning on the freeing of pixel data" then you should try to increase available JVM memory. Otherwise it will have to read data multiple times.
Have also implemented the compression options in the latest version.
Hi @tblattner,

Thanks for your new code. I've been trying it out, but I'm having a hard time getting it working entirely in MIST / ImageJ.

In general, displaying / outputting any stitched image with the updated MIST plugin has drastically increased processing time, and there is a single-threaded task that occurs after `wafer.tiff` is output; ImageJ locks up before I can continue processing the image. If it makes a difference, I am running the MIST plugin from an `.ijm` macro, not the GUI.

I also went back to my small(er) dataset that was working fine before:

- processing time went from ~2 minutes to ~15 minutes to stitch / save 864x 1.8 MP pictures
- the `mediainfo` command-line tool does not recognize the metadata anymore, showing no resolution or encoding information for the final `wafer-ome.tiff`
- the final output shows all black in GIMP and seems to have many more layers than before (1x previously, and now 16x in my case)
- GIMP error message:
- if I set `displaystitching=false` and `outputfullimage=false`, then it's quick again and I can use the Python code you provided to assemble the tiles
- there was a small issue with the Python code; I will submit a PR with edits