Closed: tpietzsch closed this 1 year ago
https://docs.hdfgroup.org/hdf5/develop/group___p_d_t_s_t_d.html

I wonder if STD data types might be better than NATIVE?

> the datatype must be added as an attribute to each setup group

I guess that is within the HDF5 file? And the XML files don't have any changes compared to the previous version?

@constantinpape do you have python code that could generate a BDV HDF5/XML with uint64?
@tpietzsch how should we test this? I guess pull this branch and use `new XmlIoSpimData().load()` on some new datatypes?
> the datatype must be added as an attribute to each setup group
>
> I guess that is within the HDF5 file? And the XML files don't have any changes compared to the previous version?

Yes, exactly.
> @tpietzsch how should we test this? I guess pull this branch and use `new XmlIoSpimData().load()` on some new datatypes?

Yes, and then also display it, so that it actually accesses the image data.
> https://docs.hdfgroup.org/hdf5/develop/group___p_d_t_s_t_d.html
>
> I wonder if STD data types might be better than NATIVE?

Can you be a bit more specific? Are you referring to this?
https://github.com/bigdataviewer/bigdataviewer-core/blob/ec7be0f7fd6ec3143fcebb8371226a20d3d58e9b/src/main/java/bdv/img/hdf5/HDF5Access.java#L258-L285

I got this from digging into the jhdf5 code. I think it makes more sense for this to be NATIVE, because this describes the type of the memory space. And being a primitive array, that should follow the endianness of the machine it's running on?
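To illustrate the distinction being discussed (a hedged sketch in numpy rather than the jhdf5 API): `H5T_STD_*` types pin a specific byte order, while `H5T_NATIVE_*` follows whatever the machine uses, which is what an in-memory primitive array holds.

```python
import numpy as np

# H5T_STD_* fixes the byte order; H5T_NATIVE_* follows the machine.
# numpy dtypes make the same distinction:
native_u2 = np.dtype(np.uint16)  # native byte order of this machine
big_u2 = np.dtype(">u2")         # fixed big-endian, like H5T_STD_U16BE
little_u2 = np.dtype("<u2")      # fixed little-endian, like H5T_STD_U16LE

# A C/Java primitive array in memory is always in native order, so reading
# into it with a NATIVE memory type needs no byte swapping.
assert big_u2 != little_u2
assert native_u2 in (big_u2, little_u2)  # native resolves to one of the two
```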
@tischi: yes, I have python code that can write this. I had to adapt it a bit though in order to write the dtype metadata, see https://github.com/constantinpape/pybdv/pull/49.
I have created an example file, will send you the location via embl chat.
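For reference, a minimal h5py sketch of what such a file needs for this PR (illustrative only, not the actual pybdv output: a real BDV file additionally requires the companion XML and the "resolutions"/"subdivisions" datasets under the setup group, which are omitted here):

```python
import h5py
import numpy as np

# Write a tiny uint64 dataset for setup 0 and tag the setup group with the
# "dataType" attribute that this PR reads. Group layout follows the BDV HDF5
# convention: setup info under "/s00", level-0 data of timepoint 0 under
# "/t00000/s00/0/cells".
data = np.arange(2 * 4 * 4, dtype=np.uint64).reshape(2, 4, 4)

with h5py.File("test-data-uint64.h5", "w") as f:
    s00 = f.create_group("s00")
    s00.attrs["dataType"] = "uint64"  # same labels as N5 DataType.toString()
    f.create_dataset("t00000/s00/0/cells", data=data)

# Read back to confirm the attribute and element type survived the round trip.
with h5py.File("test-data-uint64.h5", "r") as f:
    assert f["s00"].attrs["dataType"] == "uint64"
    assert f["t00000/s00/0/cells"].dtype == np.uint64
```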
@constantinpape your uint64 example image opens, but it is a bit hard to judge whether it opens correctly, due to the contrast limits. Do you happen to know the value range?
> Do you happen to know the value range?

The value range is 0 to 4294967437. (I offset the non-zero values with the uint32 max id to have actual uint64 data.)

I have added `test-data-uint64-v2.xml` in the same folder without these offsets, where the max value is 133, in case that helps.
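The offsetting described above can be sketched as follows (the label values here are illustrative, not the exact test data): shifting every non-zero uint32 label by the uint32 maximum produces values that only fit in uint64.

```python
import numpy as np

# Shift non-zero labels by the uint32 max so the result is genuinely uint64.
labels = np.array([0, 1, 7, 142], dtype=np.uint64)   # example labels
offset = np.uint64(np.iinfo(np.uint32).max)          # 4294967295

shifted = np.where(labels > 0, labels + offset, labels)

assert int(shifted.min()) == 0                       # background stays 0
assert int(shifted.max()) == 4294967295 + 142        # beyond the uint32 range
assert shifted.max() > np.iinfo(np.uint32).max
```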
OK, this works:

```
min = 4294967295
max = 4294967437
```
@tpietzsch Looks like it works (at least for uint64 written from python) 🥳
@tischi Great! Thanks.
HDF5 export should be updated to support more datatypes (it still does the legacy thing).
This is done now. https://github.com/bigdataviewer/bigdataviewer_fiji/pull/31
This pull request has been mentioned on Image.sc Forum. There might be relevant details there:
https://forum.image.sc/t/bigstitcher-image-fusion-produces-black-bars/85726/14
This addresses #154, adding more datatypes to `Hdf5ImageLoader`.

For this to work, the datatype must be added as an attribute to each setup group. E.g., the info (pyramid resolutions, etc.) for the first setup is under group "/s00" in the h5 file. If "/s00" has a "dataType" string attribute with value "uint8", that means that the datasets for this setup are `UnsignedByteType`. The values for the "dataType" attribute are the same as for N5; see the `DataType` labels and `toString()`.

If no "dataType" attribute is present, it is assumed that the datasets for this setup are legacy `UnsignedShortType`-stored-as-INT16.

Supported datatypes ("dataType" attribute values) are: "int8" (`ByteType`), "uint8" (`UnsignedByteType`), "int16" (`ShortType`), "uint16" (`UnsignedShortType`), "int32" (`IntType`), "uint32" (`UnsignedIntType`), "int64" (`LongType`), "uint64" (`UnsignedLongType`), "float32" (`FloatType`), "float64" (`DoubleType`).

@tischi, @StephanPreibisch Could you test/review this?
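The attribute lookup described above can be sketched like this (a hedged Python sketch of the logic, not the actual Java implementation in `Hdf5ImageLoader`; the helper name `setup_datatype` and the demo file are hypothetical):

```python
import h5py

# Mapping mirrors the supported "dataType" values listed above.
DATATYPE_TO_IMGLIB2 = {
    "int8": "ByteType", "uint8": "UnsignedByteType",
    "int16": "ShortType", "uint16": "UnsignedShortType",
    "int32": "IntType", "uint32": "UnsignedIntType",
    "int64": "LongType", "uint64": "UnsignedLongType",
    "float32": "FloatType", "float64": "DoubleType",
}

def setup_datatype(h5_path, setup_group="s00"):
    """Read the per-setup "dataType" attribute; missing attribute means
    legacy UnsignedShortType-stored-as-INT16, i.e. "uint16"."""
    with h5py.File(h5_path, "r") as f:
        return f[setup_group].attrs.get("dataType", "uint16")

# Build a tiny file exercising both branches.
with h5py.File("demo.h5", "w") as f:
    f.create_group("s00").attrs["dataType"] = "uint64"
    f.create_group("s01")  # no attribute -> legacy fallback

assert DATATYPE_TO_IMGLIB2[setup_datatype("demo.h5", "s00")] == "UnsignedLongType"
assert DATATYPE_TO_IMGLIB2[setup_datatype("demo.h5", "s01")] == "UnsignedShortType"
```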
There are two open issues:

- `Hdf5ImageLoader` respected `ImgLoaderHints.LOAD_COMPLETELY` to load datasets fully into an `ArrayImg` if possible. This is not supported currently, but I don't know how relevant it still is.

I'll create separate issues for those.