tischi / i2k-2020-s3-zarr-workshop


Add AffineTransform to ome.zarr #13

Open · tischi opened 3 years ago

tischi commented 3 years ago

@constantinpape @joshmoore I started coding and I am optimistic that I will manage to read the zarr files directly, without using the xml. Could we please (even without having a specification yet) add the affine transform to the zarr? If we do this, we can demonstrate in the workshop that images with different scales can be shown on top of each other, thereby making the point that the NGFF will be truly multi-scale.

constantinpape commented 3 years ago

Fine with me. I would propose we just add another field called transformation or affineTransformation to the multiscale group attributes. And then we should also add the resolution there for completeness.

If you agree, I can add this for our platy example datasets. And let me know if you have any preferences for how these fields should be named.

tischi commented 3 years ago

@joshmoore Could you please tell us how to do this such that it has a chance to directly make it from a prototype into a specification?

joshmoore commented 3 years ago

Most important is probably to not conflict with any value that others are using. So either pick a name that no one else is using, or adopt a whole prototype that someone else is using (e.g. https://open.quiltdata.com/b/janelia-cosem/tree/jrc_hela-2/jrc_hela-2.n5/em/fibsem-uint16/attributes.json)

constantinpape commented 3 years ago

I like the transformation spec in https://open.quiltdata.com/b/janelia-cosem/tree/jrc_hela-2/jrc_hela-2.n5/em/fibsem-uint16/attributes.json. What do you think @tischi?

tischi commented 3 years ago

Looks good! How would you add the affine transform to this? Add a transformMatrix? And have the translate separate? The other question is how the scale should be handled, because in bdv it is inside the transformMatrix...

constantinpape commented 3 years ago

> Looks good! How would you add the affine transform to this? Add a transformMatrix? And have the translate separate? The other question is how the scale should be handled, because in bdv it is inside the transformMatrix...

The way I understand it is that you have the different parts of the affine separately: scale gives you the scale factors, translation the translations, and then there could also be rotation and shear. You would then need to build the transformation matrix out of this on the Java side.
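
For illustration, a minimal sketch of how that composition could look on the Java side, assuming imglib2's AffineTransform3D and only the scale and translate parts (buildAffine is a hypothetical helper, not part of any existing API):

    import net.imglib2.realtransform.AffineTransform3D;

    // Hypothetical helper: assemble a 3D affine from the separate
    // scale and translate fields (row-packed 3x4 matrix).
    static AffineTransform3D buildAffine( final double[] scale, final double[] translate )
    {
        final AffineTransform3D affine = new AffineTransform3D();
        affine.set(
                scale[ 0 ], 0, 0, translate[ 0 ],
                0, scale[ 1 ], 0, translate[ 1 ],
                0, 0, scale[ 2 ], translate[ 2 ] );
        return affine;
    }

A rotation or shear part would be multiplied into the 3×3 block in the same way before applying the translation.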

tischi commented 3 years ago

Let's try!

constantinpape commented 3 years ago

I decided to deviate a tiny bit from the example and put the transform under the top-level dictionary (map) instead of under each of the datasets individually. I think this is closer to what bigdataviewer expects, and it should still be fine in terms of compatibility because we use the same field names for the transform.

I have updated all three zarrs on the EMBL S3. Here is how the metadata looks for the myosin one:

{
    "multiscales": [
        {
            "datasets": [
                {
                    "path": "s0"
                },
                {
                    "path": "s1"
                },
                {
                    "path": "s2"
                },
                {
                    "path": "s3"
                }
            ],
            "name": "prospr-myosin",
            "pixelResolution": {
                "dimensions": [
                    0.55,
                    0.55,
                    0.55
                ],
                "unit": "micrometer"
            },
            "scales": [
                [
                    1.0,
                    1.0,
                    1.0
                ],
                [
                    2.0,
                    2.0,
                    2.0
                ],
                [
                    4.0,
                    4.0,
                    4.0
                ],
                [
                    8.0,
                    8.0,
                    8.0
                ]
            ],
            "transform": {
                "axes": [
                    "x",
                    "y",
                    "z"
                ],
                "scale": [
                    0.55,
                    0.55,
                    0.55
                ],
                "translate": [
                    0.0,
                    0.0,
                    0.0
                ],
                "units": [
                    "micrometer",
                    "micrometer",
                    "micrometer"
                ]
            },
            "version": "0.1"
        }
    ]
}
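
For completeness, the intended reading (assuming the fields compose like this; the thread implies it but no spec fixes it yet): the absolute voxel size of pyramid level sN is the top-level transform.scale multiplied element-wise by scales[N], e.g. 8 × 0.55 = 4.4 micrometer per voxel for s3. A minimal sketch:

    // Assumed interpretation: voxel size of level n = transform.scale
    // (voxel size of s0) times scales[n] (the downsampling factor).
    static double[] voxelSizeOfLevel( final double[] baseScale, final double[] downsampling )
    {
        final double[] size = new double[ baseScale.length ];
        for ( int d = 0; d < baseScale.length; d++ )
            size[ d ] = baseScale[ d ] * downsampling[ d ];
        return size;
    }
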
tischi commented 3 years ago

> under the top-level dictionary (map) instead of for each of the datasets individually

In fact, I was wondering about this; it's probably up for discussion, but for now I like your choice! Thanks a lot!

tischi commented 3 years ago

@joshmoore @constantinpape Do we really want this for each dimension?

"units": [
                    "micrometer",
                    "micrometer",
                    "micrometer"
                ]
constantinpape commented 3 years ago

> @joshmoore @constantinpape Do we really want this for each dimension?

I also found this weird, but I wanted to stay consistent with the example.

tischi commented 3 years ago

@joshmoore @constantinpape I am trying to read what Constantin did into a class directly, using the JsonParser, but I do not get the syntax right. Any ideas? What I currently have (it throws an error) is:

    class MultiScales
    {
        MultiScale[] multiscales;
    }

    class MultiScale
    {
        String name;
        Transform transform;
        // add more
    }

    class Transform
    {
        String[] axes;
        double[] scale;
        double[] translate;
        String[] units;
    }

I want to do:

MultiScales multiScales = n5.getAttribute( pathName, "multiscales", MultiScales.class );
joshmoore commented 3 years ago

> Do we really want this for each dimension?

Dimension support will eventually be important, but for the demo I don't think it's critical.

> Any ideas (on the JsonParser)?

Not I.

tischi commented 3 years ago

This works:

MultiScale[] multiScales = n5.getAttribute( pathName, "multiscales", MultiScale[].class );

Not sure why the above doesn't, but that's fine.
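
A sketch of the likely distinction (an assumption; if I remember correctly, n5's attribute parsing is Gson-based): getAttribute hands the deserializer only the value stored under the "multiscales" key, which is a JSON array, so the MultiScales wrapper would only match the enclosing attributes object. attributesJson below is a hypothetical placeholder for the raw attributes string:

    import com.google.gson.Gson;

    // The value of the "multiscales" key is a JSON array -> MultiScale[].
    MultiScale[] multiScales = n5.getAttribute( pathName, "multiscales", MultiScale[].class );

    // The MultiScales wrapper matches the whole attributes object instead:
    MultiScales wrapper = new Gson().fromJson( attributesJson, MultiScales.class );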

tischi commented 3 years ago

Works! We are multi-scale 🥳

// open the workshop bucket on the EMBL S3
OMEZarrS3Reader reader = new OMEZarrS3Reader( "https://s3.embl.de", "us-west-2", "i2k-2020" );
// show the EM volume and keep a handle to the viewer window
SpimData em = reader.readSpimData( "em-raw.ome.zarr" );
BdvHandle bdvHandle = BdvFunctions.show( em ).get( 0 ).getBdvHandle();
// add the myosin volume to the same window and color it red
SpimData myosin = reader.readSpimData( "prospr-myosin.ome.zarr" );
BdvStackSource< ? > bdvStackSource = BdvFunctions.show( myosin, BdvOptions.options().addTo( bdvHandle ) ).get( 0 );
bdvStackSource.setColor( new ARGBType( ARGBType.rgba( 255, 0, 0, 255 ) ) );


tischi commented 3 years ago

@NicoKiaru

If you look at the above code, do you think it is possible to change the converter after adding it to BDV? I think it is not possible to change the Converter of a bdvStackSource, is it? So essentially something like this

bdvStackSource.setColor( new ARGBType( ARGBType.rgba( 255, 0, 0, 255 ) ) );

but exchanging the whole Converter. If this were possible, it would not be such a big deal and maybe even a nice way to do things (EDIT: for example, to add a nice Converter for label mask images).

What do you think?

NicoKiaru commented 3 years ago

Not that I know of, but it's more a question for Tobias.

I've never really understood the subtleties behind the creation of the converter; I just copied what was done in BdvFunctions (https://github.com/bigdataviewer/bigdataviewer-playground/blob/9487942627bf3eb5acf7e147b904423edba9aafc/src/main/java/sc/fiji/bdvpg/sourceandconverter/SourceAndConverterUtils.java#L439 )

tischi commented 3 years ago

@joshmoore @constantinpape @will-moore

What about specifying the scales like this?

"datasets": [
                {
                    "path": "s0",
                    "scale": [ 1.0, 1.0, 1.0 ]
                },
                {
                    "path": "s1",
                    "scale": [ 2.0, 2.0, 2.0 ]
                },
                {
                    "path": "s2",
                    "scale": [ 4.0, 4.0, 4.0 ]
                },
                {
                    "path": "s3",
                    "scale": [ 8.0, 8.0, 8.0 ]
                }
            ],

...or is the idea to compute them from the dataset dimensions?
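
If the latter, a minimal sketch of what "compute them" could mean, assuming the array dimensions of each level are at hand (relativeScale is a hypothetical helper):

    // Hypothetical: derive per-level scale factors from the array
    // dimensions, relative to the full-resolution dataset s0.
    static double[] relativeScale( final long[] dimsS0, final long[] dimsSN )
    {
        final double[] scale = new double[ dimsS0.length ];
        for ( int d = 0; d < dimsS0.length; d++ )
            scale[ d ] = ( double ) dimsS0[ d ] / dimsSN[ d ];
        return scale;
    }

Note that this only recovers the nominal factors when the downsampling was exact; odd dimensions that were rounded would make the derived factors slightly off, which is an argument for storing them explicitly.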

joshmoore commented 3 years ago

This is quite similar to what I had originally in the multiscales spec, but there were several different proposals.