vreuter opened this issue 11 months ago
NB: the .nd2 metadata parse accompanies the conversion to .zarr. This is at the beginning of the pipeline and could thus inform all subsequent steps.
Here's an example of the metadata:
`P0001.zarr/.zattrs`:

```json
{
    "metadata": {
        "channel_0": {
            "emissionLambdaNm": 700.5,
            "excitationLambdaNm": null,
            "name": "Far Red"
        },
        "channel_1": {
            "emissionLambdaNm": 630.0,
            "excitationLambdaNm": 561.0,
            "name": "Red"
        },
        "microscope": {
            "immersionRefractiveIndex": 1.515,
            "modalityFlags": [
                "fluorescence",
                "camera"
            ],
            "objectiveMagnification": 60.0,
            "objectiveName": "Plan Apo \u03bb 60x Oil",
            "objectiveNumericalAperture": 1.4,
            "zoomMagnification": 1.0
        },
        "voxel_size": [
            0.3,
            0.107325563330673,
            0.107325563330673
        ]
    },
    "multiscales": [
        {
            "axes": [
                {
                    "name": "t",
                    "type": "time",
                    "unit": "minute"
                },
                {
                    "name": "c",
                    "type": "channel"
                },
                {
                    "name": "z",
                    "type": "space",
                    "unit": "micrometer"
                },
                {
                    "name": "y",
                    "type": "space",
                    "unit": "micrometer"
                },
                {
                    "name": "x",
                    "type": "space",
                    "unit": "micrometer"
                }
            ],
            "datasets": [
                {
                    "coordinateTransformations": [
                        {
                            "scale": [
                                1.0,
                                1.0,
                                0.3,
                                0.107325563330673,
                                0.107325563330673
                            ],
                            "type": "scale"
                        }
                    ],
                    "path": "0"
                }
            ],
            "name": "seq_images_zarr_P0001.zarr",
            "version": "0.4"
        }
    ]
}
```
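Since this block lives in each FOV's `.zattrs`, downstream steps could read it back without ever touching the original `.nd2`. A minimal sketch of what that might look like (the `read_fov_metadata` helper is hypothetical; the field names are taken from the example above, not from existing pipeline code):

```python
import json
from pathlib import Path


def read_fov_metadata(zarr_path: Path) -> dict:
    """Read the custom 'metadata' block from a FOV's .zattrs file."""
    with open(zarr_path / ".zattrs") as fh:
        return json.load(fh)["metadata"]


meta = read_fov_metadata(Path("P0001.zarr"))
z_um, y_um, x_um = meta["voxel_size"]  # (z, y, x) spacing in micrometers
channel_names = [
    meta[key]["name"] for key in sorted(meta) if key.startswith("channel_")
]  # ['Far Red', 'Red']
```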
Note also that this metadata is available for each FOV, and therefore would work nicely even if we ultimately parallelise entirely across FOVs (#75).
Newer version: here's what we could infer: the axis order `(t, c, z, y, x)` (a sketch follows).
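For example, the dimension order could be read straight from the OME-NGFF `multiscales` entry instead of being hard-coded (a sketch assuming the `.zattrs` layout shown above; `infer_axes` is a hypothetical helper):

```python
import json


def infer_axes(zattrs: dict) -> tuple[str, ...]:
    """Return the axis names declared in the OME-NGFF multiscales metadata."""
    return tuple(ax["name"] for ax in zattrs["multiscales"][0]["axes"])


with open("P0001.zarr/.zattrs") as fh:
    axes = infer_axes(json.load(fh))  # ('t', 'c', 'z', 'y', 'x')
```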
Earlier version:
This would share code from `image_io.stack_nd2_to_dask` and `image_io.stack_tif_to_dask`. Then `zarr_conversions` could be parsed directly from config, obviating the need for building an `ImageHandler` in `convert_datasets_to_zarr.py`.
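To make that direction concrete, here is a rough sketch of attaching the parsed metadata at conversion time. It assumes the `nd2` package's `ND2File` API (`to_dask`, `voxel_size`, `metadata.channels`); the helper name `convert_fov_to_zarr` and the exact channel attribute names are assumptions, not the current `convert_datasets_to_zarr.py` implementation:

```python
import dask.array as da
import nd2
import zarr


def convert_fov_to_zarr(nd2_path: str, zarr_path: str) -> None:
    """Convert one FOV's .nd2 to .zarr, attaching the parsed metadata as .zattrs."""
    with nd2.ND2File(nd2_path) as f:
        vox = f.voxel_size()  # VoxelSize(x, y, z) in micrometers (assumed nd2 API)
        channels = {
            f"channel_{i}": {
                "name": ch.channel.name,
                "emissionLambdaNm": ch.channel.emissionLambdaNm,
                "excitationLambdaNm": ch.channel.excitationLambdaNm,
            }
            for i, ch in enumerate(f.metadata.channels)  # attribute names assumed
        }
        root = zarr.open_group(zarr_path, mode="w")
        root.attrs["metadata"] = {**channels, "voxel_size": [vox.z, vox.y, vox.x]}
        # Write the pixel data while the nd2 file is still open (to_dask is lazy).
        da.to_zarr(f.to_dask(), zarr_path, component="0")
```

Done this way, every per-FOV `.zarr` would carry its own metadata from the very start of the pipeline, matching the example `.zattrs` above.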