tischi opened 3 years ago
Fine with me.
I would propose we just add another field called `transformation` or `affineTransformation` to the multiscale group attributes. And then we should also add the resolution there for completeness.
If you agree, I can add this for our platy example datasets. And let me know if you have any preferences for how these fields should be called.
@joshmoore Could you please tell us how to do this such that it has a chance to directly make it from a prototype into a specification?
Most important is probably to not conflict with any value that others are using. So either pick a name that no one else is using, or adopt a whole prototype that someone else is using (e.g. https://open.quiltdata.com/b/janelia-cosem/tree/jrc_hela-2/jrc_hela-2.n5/em/fibsem-uint16/attributes.json)
I like the transformation spec in https://open.quiltdata.com/b/janelia-cosem/tree/jrc_hela-2/jrc_hela-2.n5/em/fibsem-uint16/attributes.json. What do you think @tischi?
Looks good! How would you add the affine transform to this? Add a `transformMatrix`? And have the `translate` separate? The other question is how the `scale` should be handled, because in bdv it is inside the `transformMatrix`...
The way I understand this is that you have the different parts of the affine separately: `scale` gives you the scale factors, `translation` the translations, and then there could also be `rotation` and `shear`.
You would then need to build the transformation matrix out of this on the java side.
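The composition described above can be sketched as follows (editor's sketch, not code from the thread; the class and method names are illustrative). It builds a 3x4 row-major affine matrix, the layout that e.g. imglib2's `AffineTransform3D` works with, from separate scale and translate fields; rotation and shear would be folded in by matrix multiplication.

```java
import java.util.Arrays;

public class AffineFromParts
{
    // Compose a row-major 3x4 affine matrix from separate scale and
    // translation components:
    // [ sx  0   0  tx ]
    // [ 0   sy  0  ty ]
    // [ 0   0   sz tz ]
    public static double[] affine3D( double[] scale, double[] translate )
    {
        final double[] m = new double[ 12 ];
        for ( int d = 0; d < 3; d++ )
        {
            m[ d * 4 + d ] = scale[ d ];   // diagonal: scale factors
            m[ d * 4 + 3 ] = translate[ d ]; // last column: translation
        }
        return m;
    }

    public static void main( String[] args )
    {
        System.out.println( Arrays.toString(
                affine3D( new double[]{ 0.55, 0.55, 0.55 }, new double[]{ 0.0, 0.0, 0.0 } ) ) );
    }
}
```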
Let's try!
I decided to deviate a tiny bit from the example and put the `transform` under the top-level dictionary (map) instead of for each of the datasets individually. I think this is closer to what bigdataviewer expects and should still be fine in terms of compatibility, because we use the same fields for `transform`.
I have updated all three zarrs on the embl.s3. Here is how the metadata looks for the myosin one:
```json
{
  "multiscales": [
    {
      "datasets": [
        { "path": "s0" },
        { "path": "s1" },
        { "path": "s2" },
        { "path": "s3" }
      ],
      "name": "prospr-myosin",
      "pixelResolution": {
        "dimensions": [ 0.55, 0.55, 0.55 ],
        "unit": "micrometer"
      },
      "scales": [
        [ 1.0, 1.0, 1.0 ],
        [ 2.0, 2.0, 2.0 ],
        [ 4.0, 4.0, 4.0 ],
        [ 8.0, 8.0, 8.0 ]
      ],
      "transform": {
        "axes": [ "x", "y", "z" ],
        "scale": [ 0.55, 0.55, 0.55 ],
        "translate": [ 0.0, 0.0, 0.0 ],
        "units": [ "micrometer", "micrometer", "micrometer" ]
      },
      "version": "0.1"
    }
  ]
}
```
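One plausible reading of the metadata above (an editor's note, not a spec statement) is that the absolute voxel size at pyramid level i is the element-wise product of `transform.scale` and `scales[i]`. A minimal sketch of that arithmetic, with illustrative names:

```java
import java.util.Arrays;

public class LevelVoxelSize
{
    // Element-wise product of the base voxel size (transform.scale)
    // and the per-level downsampling factors (scales[i]).
    public static double[] voxelSize( double[] baseScale, double[] levelFactors )
    {
        final double[] size = new double[ baseScale.length ];
        for ( int d = 0; d < baseScale.length; d++ )
            size[ d ] = baseScale[ d ] * levelFactors[ d ];
        return size;
    }

    public static void main( String[] args )
    {
        // level s3 of the myosin example: 0.55 micrometer * 8 per axis
        System.out.println( Arrays.toString(
                voxelSize( new double[]{ 0.55, 0.55, 0.55 }, new double[]{ 8.0, 8.0, 8.0 } ) ) );
    }
}
```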
under the top-level dictionary (map) instead of for each of the datasets individually
In fact I was wondering about this, probably up to discussion, but for now I like your choice! Thanks a lot!
@joshmoore @constantinpape Do we really want this for each dimension?

```json
"units": [ "micrometer", "micrometer", "micrometer" ]
```
@joshmoore @constantinpape Do we really want this for each dimension?
I also found this weird, but I wanted to stay consistent with the example.
@joshmoore @constantinpape I am trying to read what Constantin did into a class directly, using the JsonParser, but I do not get the syntax right. Any ideas? What I currently have is (throws an error):
```java
class MultiScales
{
    MultiScale[] multiscales;
}

class MultiScale
{
    String name;
    Transform transform;
    // add more
}

class Transform
{
    String[] axes;
    double[] scale;
    double[] translate;
    String[] units;
}
```
I want to do:
```java
MultiScales multiScales = n5.getAttribute( pathName, "multiscales", MultiScales.class );
```
Do we really want this for each dimension?
Dimension support will eventually be important, but for the demo I don't think it's critical.
Any ideas (on the JsonParser)?
Not I.
This works:

```java
MultiScale[] multiScales = n5.getAttribute( pathName, "multiscales", MultiScale[].class );
```

Not sure why the above doesn't, but that's fine.
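A likely explanation (editor's note, not confirmed in the thread): `getAttribute` deserializes the value stored under the given key, and in the attributes file the value of the `"multiscales"` key is the JSON array itself:

```json
"multiscales": [ { "name": "prospr-myosin", "...": "..." } ]
```

So the target type must be the array type `MultiScale[].class`. The wrapper class `MultiScales` would instead correspond to an object one level up that *contains* a `multiscales` field, which is not what is stored under that key.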
Works! We are multi-scale 🥳
```java
OMEZarrS3Reader reader = new OMEZarrS3Reader( "https://s3.embl.de", "us-west-2", "i2k-2020" );
SpimData em = reader.readSpimData( "em-raw.ome.zarr" );
BdvHandle bdvHandle = BdvFunctions.show( em ).get( 0 ).getBdvHandle();
SpimData myosin = reader.readSpimData( "prospr-myosin.ome.zarr" );
BdvStackSource< ? > bdvStackSource = BdvFunctions.show( myosin, BdvOptions.options().addTo( bdvHandle ) ).get( 0 );
bdvStackSource.setColor( new ARGBType( ARGBType.rgba( 255, 0, 0, 255 ) ) );
```
@NicoKiaru If you look at the above code, do you think it is possible to change the converter after adding it to BDV? I think it was not possible to change the `Converter` of a bdvStackSource, is it? So essentially something like this:

```java
bdvStackSource.setColor( new ARGBType( ARGBType.rgba( 255, 0, 0, 255 ) ) );
```

but exchanging the whole `Converter`. If this was possible then it would not be such a big deal and maybe also a nice way to do things (EDIT: for example to add a nice `Converter` for label mask images).
What do you think?
Not that I know of, but it's more a question for Tobias.
I've never really understood the subtleties behind the creation of the converter, I just copied what was done in bdvfunctions (https://github.com/bigdataviewer/bigdataviewer-playground/blob/9487942627bf3eb5acf7e147b904423edba9aafc/src/main/java/sc/fiji/bdvpg/sourceandconverter/SourceAndConverterUtils.java#L439 )
@joshmoore @constantinpape @will-moore
What about specifying the scales like this?
```json
"datasets": [
  { "path": "s0", "scale": [ 1.0, 1.0, 1.0 ] },
  { "path": "s1", "scale": [ 2.0, 2.0, 2.0 ] },
  { "path": "s2", "scale": [ 4.0, 4.0, 4.0 ] },
  { "path": "s3", "scale": [ 8.0, 8.0, 8.0 ] }
],
```
...or is the idea to compute them from the datasets' dimensions?
This is quite similar to what I had originally in the multiscales spec, but there were several different proposals.
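On the second option: if the downsampling factors were not stored explicitly, they could be recovered from the array shapes relative to the full-resolution level s0. An editor's sketch of that computation, with illustrative names (this is one of the proposals mentioned, not the agreed spec):

```java
import java.util.Arrays;

public class ScalesFromDimensions
{
    // Per-axis scale factor of a pyramid level, as the ratio of the
    // full-resolution (s0) dimensions to that level's dimensions.
    public static double[] relativeScale( long[] s0Dims, long[] levelDims )
    {
        final double[] scale = new double[ s0Dims.length ];
        for ( int d = 0; d < s0Dims.length; d++ )
            scale[ d ] = ( double ) s0Dims[ d ] / levelDims[ d ];
        return scale;
    }

    public static void main( String[] args )
    {
        // e.g. a 512^3 s0 and a 128^3 s2 give a factor of 4 per axis
        System.out.println( Arrays.toString(
                relativeScale( new long[]{ 512, 512, 512 }, new long[]{ 128, 128, 128 } ) ) );
    }
}
```

Note that for image sizes that are not exactly divisible by the downsampling factor this only recovers an approximate factor, which is one argument for storing the scales explicitly.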
@constantinpape @joshmoore I started coding and I am optimistic that I will manage to read the zarr files directly, without using the xml. Could we please (even without having a specification yet) add the affine transform to the zarr? If we do this, we can demonstrate in the workshop that images with different scales can be displayed on top of each other, making the point that the NGFF will be truly multi-scale.