VolkerH opened this issue 5 years ago
I encountered a very similar bug today: Fiji/BigStitcher crashes while opening datasets with high pixel values. Whenever any pixel intensity is above ~32k, it crashes when opening the file. Example dataset attached (32 MB).
Hi, what is the pixel type?
uint16 in my case.
Then maybe you are not saving it "right", i.e. not the way we save a uint16. Maybe create a ramp TIFF image [0 ... 65535], open it in BigStitcher, and save it as HDF5 to understand the convention? Java only has a signed int16, which has to encode the uint16 values.
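A minimal numpy sketch of what that ramp test illustrates, assuming (as the comment above suggests) that the int16 data in the HDF5 is meant to be reinterpreted as unsigned on the Java side; this is an illustration, not code from BigStitcher or pybdv:

```python
import numpy as np

# Full uint16 ramp [0 ... 65535], as suggested above.
ramp = np.arange(65536, dtype=np.uint16)

# Java has no unsigned 16-bit type, so the same 16 bits end up in a
# signed short. Viewing the bits as int16 (no values are changed,
# only their interpretation) shows what the HDF5 dataset would hold:
signed = ramp.view(np.int16)

print(ramp[:3], ramp[-3:])      # [0 1 2] [65533 65534 65535]
print(signed[:3], signed[-3:])  # [0 1 2] [-3 -2 -1]

# Everything above 32767 becomes negative in the signed view, which
# is why intensities above ~32k are the ones that expose the bug.
```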
I just had a look. It seems that @constantinpape has identified and addressed this issue of mapping types between numpy and Java in his code here, including value range adjustments:
https://github.com/constantinpape/pybdv/blob/master/pybdv/dtypes.py
What still confuses me is that the one tile I had in my project is visualized correctly in BDV (so maybe some value-range checking and clipping happens before visualization). It only crashes during the pairwise shift computation (in the hdf5lib).
Hi @tpietzsch, do you maybe have an idea what could cause the above exception? The HDF5 is written from Python and there are type conversion problems which should be fixed. However, I am puzzled by this concurrency exception too ... maybe it just vanishes once the type is fixed. Thank you so much!
@VolkerH, yes, I implemented this a while ago and it should take care of shifting the value range from the unsigned representation [0 ... 65535] to the signed representation [-32768 ... 32767], or throw a ValueError if the input data does not fit the value range. This works for me, but I haven't checked it extensively for different data type combinations. If you use this code and find any issues, please let me know.
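For reference, a minimal sketch of the kind of conversion-plus-range-check described here; it assumes an int16 target and bit reinterpretation for uint16, and is only an illustration, not the actual code in pybdv's dtypes.py:

```python
import numpy as np

def to_bdv_int16(data):
    """Map integer data onto the signed int16 representation used in
    BDV-style HDF5 files (illustrative sketch, not pybdv's code)."""
    data = np.asarray(data)
    if data.dtype == np.uint16:
        # Same bits, signed interpretation: 0..32767 stay as they are,
        # 32768..65535 land in the negative int16 range.
        return data.view(np.int16)
    if np.issubdtype(data.dtype, np.integer):
        lo, hi = int(data.min()), int(data.max())
        if lo < 0 or hi > 65535:
            raise ValueError(f"values [{lo}, {hi}] do not fit into uint16")
        return data.astype(np.uint16).view(np.int16)
    raise ValueError(f"unsupported dtype {data.dtype}")
```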
Fixed the uint16 -> int16 conversion in my code, npy2bdv. Thanks everyone for the helpful discussion.
@nvladimus thanks, I will test in the coming days and raise an issue in your repo if problems arise. I know how to work around it on the Python side when generating the file; it just confused me that BDV would interpret and display the generated file correctly while the phase correlation code would throw an exception.
I have recently been using https://github.com/nvladimus/npy2bdv from @nvladimus to create BigStitcher xml/h5 files directly from Python. I have been doing this with various data types (uint8, uint16, int32, etc.). All of these projects read fine in BigStitcher and display fine in BigDataViewer.
However, there is an issue with some data types during pairwise stitching: if any values in the dataset are outside the range 0 ... 32768, a type conversion that appears to happen during the pairwise phase-shift calculation throws an exception. Now that I have found it, I can work around it by clipping the values to the allowed range, but ideally this would be fixed in the BigStitcher code.
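For anyone hitting the same exception before a fix lands, a rough sketch of that clipping workaround on the Python side (the function name is mine and the npy2bdv writer call is omitted; check its README for the exact API):

```python
import numpy as np

def clip_for_bigstitcher(stack):
    """Saturate intensities at 32767 so nothing outside the signed
    16-bit range reaches the pairwise shift computation."""
    return np.clip(stack, 0, 32767).astype(np.uint16)

# e.g. hand clip_for_bigstitcher(stack) to the npy2bdv writer
# instead of the raw stack.
```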