mobie / mobie.github.io


Proposal: structure of a source-pairwise registration definition #32

Open martinschorb opened 3 years ago

martinschorb commented 3 years ago

ping @NicoKiaru

The aim of this proposal is to provide means of defining pairwise registrations of sources.

This applies when grouping sets of sources into a common viewing scenario (multi-modal experiment). The registrations would be applied based on a ruleset defined by the viewer/user (different registrations can be competing/contradictory).

I would keep the transformation specs consistent with what is agreed on/discussed here: https://github.com/ome/ngff/issues/28

Also, each source is considered to provide a base_transform that places it into physical space.

The registration spec should be stored with each data source. It would link it to a reg_target. Ideally a registration algorithm would populate both the metadata of the "moving" source and the "fixed" source.

reg_target would be defined as a relative path pointing to the target location (BDV-XML, OME-Zarr, path inside h5/n5/zarr/..., ...)

Each registration spec should consist of (labels open for discussion):

An example:


[
 {
  "reg_target": "../exp2_LightMicroscopy_40x_01.xml",
  "registrations": [
   {
    "reg_nascency": "image-based",
    "reg_originator": "Amira",
    "reg_coordinatebase": "voxel",
    "reg_tf_type": "affinetransform3d",
    "transformation": {
     "type": "affinetransform3d",
     "parameters": [
      1.7209895751990397E-4,
      0.0,
      0.0,
      -82.33230862710636,
      0.0,
      1.720993359636627E-4,
      0.0,
      -65.69365104482824,
      0.0,
      0.0,
      1.0,
      0.0
     ]
    }
   },
   {
    "reg_nascency": "landmark-based",
    "reg_originator": "BigWarp",
    "reg_coordinatebase": "physical",
    "reg_tf_type": "wrappedTransform",
    "transformation": {
     "type": "ThinplateSplineTransform",
     "parameters": {
      "srcPts": [
       [-81.60433025717083, -75.9510412016272, -75.59189107339267, -81.20527455913245],
       [-64.849069200315, -63.13312969875, -60.49936209169673, -60.12691010686091]
      ],
      "tgtPts": [
       [-81.72869251808957, -76.1270376177869, -75.81824863770494, -81.42426694879573],
       [-64.66067129970858, -62.88538725994947, -60.255555408664456, -59.94763265821272]
      ]
     }
    }
   }
  ]
 },
 {
  "reg_target": "../exp2_EM_map3.xml",
  "registrations": [...]
 }
]
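A quick way to check files against this draft layout would be a small validator like the following sketch. The field names are the ones proposed above and still open for discussion; nothing here is part of an existing MoBIE API.

```python
# Required keys per registration entry, as proposed above (labels still open
# for discussion).
REQUIRED_KEYS = {"reg_nascency", "reg_originator", "reg_coordinatebase",
                 "reg_tf_type", "transformation"}

def validate_registrations(entries):
    """Minimally check a parsed list of {reg_target, registrations} entries."""
    for entry in entries:
        if "reg_target" not in entry:
            raise ValueError("each entry must point at a target source")
        for reg in entry["registrations"]:
            missing = REQUIRED_KEYS - reg.keys()
            if missing:
                raise ValueError(f"registration is missing keys: {missing}")
    return True

example = [{
    "reg_target": "../exp2_LightMicroscopy_40x_01.xml",
    "registrations": [{
        "reg_nascency": "image-based",
        "reg_originator": "Amira",
        "reg_coordinatebase": "voxel",
        "reg_tf_type": "affinetransform3d",
        "transformation": {"type": "affinetransform3d", "parameters": [1.0] * 12},
    }],
}]
validate_registrations(example)
```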
martinschorb commented 3 years ago

There should be a priority setting (in the viewer) that defines which registration is considered if there are conflicts/multiple registrations for a source pair.

In general: image-based > landmark-based > acquisition-based.

If there is a special originator (say ec-CLEM), it could also get priority over the general nascency.
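This priority ruleset could be sketched as follows; the nascency ordering and the ec-CLEM override are the ones suggested above, while the function and dictionary names are invented for illustration.

```python
# Lower rank = higher priority, following the suggested ordering:
# image-based > landmark-based > acquisition-based.
NASCENCY_PRIORITY = {"image-based": 0, "landmark-based": 1, "acquisition-based": 2}
# Hypothetical: originators that trump the general nascency ordering.
PRIORITY_ORIGINATORS = {"ec-CLEM"}

def pick_registration(registrations):
    """Return the highest-priority registration among candidates for a source pair."""
    def rank(reg):
        # a special originator wins regardless of its nascency
        if reg.get("reg_originator") in PRIORITY_ORIGINATORS:
            return (0, -1)
        return (1, NASCENCY_PRIORITY.get(reg["reg_nascency"], 99))
    return min(registrations, key=rank)

regs = [
    {"reg_nascency": "landmark-based", "reg_originator": "BigWarp"},
    {"reg_nascency": "image-based", "reg_originator": "Amira"},
]
assert pick_registration(regs)["reg_originator"] == "Amira"
```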

tischi commented 3 years ago

@martinschorb

I wanted to bring this repo to your attention: https://github.com/image-transform-converters/image-transform-converters/tree/master/src/main/java/itc/converters If you look a bit into the code you can see what information is necessary for certain transformations to be converted into others.

I think your spec looks like a good start. To fit this into MoBIE you would have to think how to make this fit the sourceTransformation spec.

martinschorb commented 3 years ago

Perfect, I am totally in for using these transformation definitions. The key thought here should focus on how to pick the available transforms given that registrations are always defined for source pairs.

martinschorb commented 3 years ago

A scenario: Source A0 and source B0 are linked using several different registrations from different originators.

I want to view something in source B2 which is the same modality as B0 in context (and showing an overlay) of A0.

Now, how will the transformation of source A0 be defined?

martinschorb commented 3 years ago

In this example scenario: MuMoReg

When I want the viewer to show the "Tomo anchor map" (EM) on top of the "LowMag Overview" (LightMic) with the Light Microscopy as the "active/primary source", I need to apply all the transformations indicated with the yellow arrows for the EM source and the green arrow for the Light microscopy.

Depending on the preferred registration method, I could also have chosen the "image/feature-based registration" between "Higher mag map" (EM) and "Higher mag image" (Light mic.) instead of the "landmark-based".

[image: MuMoReg_view_path1]

martinschorb commented 3 years ago

@constantinpape @tischi Do we have a project where the "view" feature based on JSON metadata is already implemented? As discussed, I would like to use that as a "static" entry point to the general concept described here.

constantinpape commented 3 years ago

@martinschorb I will update one or two projects tomorrow and send the links here.

tischi commented 3 years ago

@martinschorb

In your current proposal I see only one source, namely the "reg_target"; since you were talking about pair-wise registrations I would have expected two?!

You have several transformations for one source. As discussed, the easiest entry point to MoBIE would be to define different views in which, for a couple of sources, the optimal transformations are chosen (hard-coded in the view) to show those sources in a specific context. Then the user can switch between the views manually. I would recommend starting with this, because by writing out those views we will probably already learn something about the criteria for which transformations to pick in which context.

Then, in a second step, we could think about a way that the viewer would (semi-)automatically switch the view, depending on some criteria that we would need to define.

martinschorb commented 3 years ago

In your current proposal I see only one source, namely the "reg_target"; since you were talking about pair-wise registrations I would have expected two?!

If we go for one central repository of registrations, we would need to store them pairwise. In the proposal, my idea is to provide the registrations as part of each source's metadata. That would lead to duplicated metadata for reversible transforms but have the advantage that they stay linked to the source.

There are pros and cons to either approach, and we should definitely discuss whether we want to keep the registration information with each source or in a single array-type storage entity as part of the multi-modality description.

constantinpape commented 3 years ago

@martinschorb I have pushed the updated clem-yeast project onto a branch now: https://github.com/mobie/yeast-clem-datasets/tree/spec-v2 The main json is here: https://github.com/mobie/yeast-clem-datasets/blob/spec-v2/data/yeast/dataset.json

martinschorb commented 3 years ago

I have a few questions about the dataset structure.

martinschorb commented 3 years ago

Also, I struggle to find the communication on how to install the development version of MoBIE. Is there a docpage for that that I could not find? I am happy to write one if needed.

tischi commented 3 years ago

Also, I struggle to find the communication on how to install the development version of MoBIE. Is there a docpage for that that I could not find? I am happy to write one if needed.

The dev version is on MoBIE-beta. If that's not described yet in the docs, it would indeed be nice if you could add it!

However, the current specs are not available anywhere yet, because this is too unstable even for MoBIE-beta. If you want to play with this you need to run from IntelliJ. In fact, it would be nice to have documentation for this as well.

constantinpape commented 3 years ago
  • Where exactly would the different transformation information go?

In the new spec, the most natural place would be in datasets.json:views and then each transformation would correspond to a view in there, according to this spec. If you would rather have this in a non-centralized way and/or in a different format feel free to propose something and we can see what works best. (But in general we would want as few additional things as possible in xml, because json is so much more convenient.)

  • Right now, each source is displayed according to their XML as far as I can see. Can I define bookmarks that individually transform selected sources (append an extra transformation to what is defined in the XML)? What would be the syntax?

I am not sure exactly what you mean by this. A source is loaded according to the xml AND the additional metadata in the view. For additional transformations you can use the sourceTransformations, again see the spec for details.

  • Can I provide a multi-channel source (like in BDV-PG) that shares the same transformation?

There are currently no multi-channel sources; @tischi would know more about bridging the compatibility with this feature. But it is possible to specify a transformation for multiple sources via the list of sources in sourceTransformations, see above.

Also, I struggle to find the communication on how to install the development version of MoBIE. Is there a docpage for that that I could not find? I am happy to write one if needed.

Yes, it would be great to have a docpage for advanced users in mobie.github.io that explains how to a) install from the MoBIE-beta update site and b) install a dev version from IntelliJ. To make a new docpage, you would just need to add the corresponding markdown here and then link to it in here.

@tischi

However, the current specs are nowhere because this is too unstable even for MoBIE beta.

I have the platybrowser data almost ready for the new spec now (gonna write you a mail once it's all pushed). So from my side we could push something to MoBIE beta and ask people to test it whenever you're ready.

martinschorb commented 3 years ago

Probably a good starting scenario to do a static test:

(from https://github.com/mobie/mobie-viewer-fiji/issues/244)

How could I define these global display/view scenarios (should we call them "scenes" to have clear nomenclature?)?
I was thinking of the "NormView" definition as used in bookmarks, but instead of referring to the viewer window it would in this case refer to the grid cell.

constantinpape commented 3 years ago

Probably a good starting scenario to do a static test:

* many 3D sources, all have slightly different dims

* each source has one target view (like a bookmark)

* we would like two display scenarios:
  a)   a grid view with each source in its original orientation
  b)   a grid view with each source shown in its target view

(from mobie/mobie-viewer-fiji#244)

I agree, that's a good starting point.

How could I define these global display/view scenarios (should we call them "scenes" to have clear nomenclature?)? I was thinking of the "NormView" definition as used in bookmarks, but instead of referring to the viewer window it would in this case refer to the grid cell.

I think "scene" is a good name to clearly demarcate this from other things in the spec. But note that "normView" is deprecated. In the new spec we have "normalizedAffine", which is a possible "viewerTransform", see https://mobie.github.io/specs/mobie_spec.html#view-metadata. Note that a view refers to the full viewer state now.

To define the different scenes you could just have a view per scene (simplified):

{
"scene1": {
  "viewerTransform": {"normalizedAffine": [...]}
},
"scene2": {
  "viewerTransform": {"normalizedAffine": [...]}
}
}

The question is how we "register" these scenes with the viewer then. I see two different options for this:

I think how to do this best still depends on how exactly we want this to map to the UI.

martinschorb commented 3 years ago

Do you have a spec2 dataset that already has bookmarks somewhere? I could not find any...

Essentially the "scene" syntax would behave like a bookmark, except that it will be source-specific. I think a viewer transform is probably what will be needed for this functionality; however, we need to jump back and forth between physical space and view space. The only realistic sequence of transformations from source voxel to viewer pixel in my view would be:

source.base_transform (voxel to physical) -> source.scene_transform (physical to viewer)

here the scene would simply replace the bookmark and in a single-tile-viewgrid case we are done. But then we need to include the grid:

-> grid_place_transform (viewer to physical) -> bookmark (physical to viewer)

In case there is no grid, grid_place_transform would just be the inverse of the selected bookmark and the scene_transform target canvas is simply the entire viewer. In a grid case, the scene will behave like a bookmark inside the grid tile, which is then positioned by the grid_place_transform in "real space" and the bookmark will bring it back to the viewer.

Does this concept make sense?
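To sanity-check this chain, here is a minimal numpy sketch. All names and values are hypothetical, taken from the sequence above rather than from any existing MoBIE API; transforms are 4x4 homogeneous matrices, and the first transform applied sits rightmost in the matrix product.

```python
import numpy as np

def scale(s):
    """4x4 homogeneous scaling, e.g. voxel -> physical units."""
    m = np.eye(4)
    m[:3, :3] *= s
    return m

def translation(t):
    """4x4 homogeneous translation."""
    m = np.eye(4)
    m[:3, 3] = t
    return m

# hypothetical transforms following the chain above
base_transform = scale(0.01)              # voxel to physical
scene_transform = translation([5, 0, 0])  # physical to viewer (bookmark-like)
bookmark = scene_transform                # physical to viewer
# single-tile case from above: grid placement is the inverse of the bookmark
grid_place_transform = np.linalg.inv(bookmark)

full = bookmark @ grid_place_transform @ scene_transform @ base_transform
voxel = np.array([100.0, 0.0, 0.0, 1.0])
# with the grid step cancelling the bookmark, only scene o base remains
assert np.allclose(full @ voxel, [6.0, 0.0, 0.0, 1.0])
```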

martinschorb commented 3 years ago

In fact, if we want to use the "scene" also to represent variable registrations as outlined above, it would need to live in physical to physical space. This would require the grid to define each tile in physical space. Is this how it is implemented right now? The grid lives in physical space...

In this case, the sequence of transformations would be:

source.base_transform (voxel to physical) -> source.scene_transform (physical to PHYSICAL)

-> grid_place_transform (PHYSICAL to physical) -> bookmark (physical to viewer)

And the scene(s) would not be represented as bookmark-like viewerTransform but rather a standard coordinate transform.

martinschorb commented 3 years ago

Another option would be to split the two use cases.

That would be more clear and intuitive but would introduce an additional layer...


source.base_transform (voxel to physical) -> source.registration_transform (physical to physical) ->source.scene_transform (physical to viewer [relative to tile] )

-> grid_place_transform (viewer [tile] to physical) -> bookmark (physical to viewer)

martinschorb commented 3 years ago

We would basically have 3 layers of transform for each source

on the viewer side (globally) we would have:

I am still not sure how the grid placement should behave in a single-tile case (no grid)...

martinschorb commented 3 years ago

And the grid placement would need to make sure the original voxel size is maintained. Ouch, this is getting really convoluted...

constantinpape commented 3 years ago

@martinschorb you can find spec-v2 projects with bookmarks here: https://github.com/mobie/arabidopsis-root-lm-datasets https://github.com/mobie/platybrowser-datasets https://github.com/mobie/yeast-clem-datasets https://github.com/mobie/zebrafish-lm-datasets

The bookmarks are now in dataset.json:views.

For all of them the spec-v2 version is on the spec-v2 branch. Note that the platybrowser one doesn't fully work with the viewer yet, I don't know yet if that's an issue with the viewer or the data.

I will try to have a look at the transformation things later.

constantinpape commented 3 years ago

I think there is a much simpler solution for all this, which is covered by the current spec already: you can use sourceTransforms to define a transformation per source. (This is similar to the transformations defined in the bdv.xml, and it will be applied on top of it). This transformation is always defined in the coordinate space of the source, so there is no need to intermingle it with the grid placement; and it works together with a grid. So as an example for two sources in a grid and one view scene1 it would look something like this (simplified):

{"views": {
  "scene1": {
    "sourceDisplays": [{"imageDisplay": {"sources": ["im1", "im2"]}}],
    "sourceTransforms": [
      {"affine": {"parameters": [...], "sources": ["im1"]}},  # registration for the first source
      {"affine": {"parameters": [...], "sources": ["im2"]}},  # registration for the second source
      {"grid": {"sources": ["im1", "im2"]}}
    ]
  }
}}

And more views could be defined in the same way, just using different parameters for the affines.

As far as I can see that covers everything you describe above, without introducing the "scene" as an additional argument. Now, the remaining question is how to map this to the UI. Currently, this would just result in a bookmark entry for each view.
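Several such views could be generated programmatically with something like the following sketch; the key names follow the simplified example above and may well differ from the final spec.

```python
import json

def make_view(sources, affines):
    """Build one simplified view: a registration affine per source plus a grid.

    `affines` maps a source name to its 12 affine parameters (3x4, row-major).
    """
    transforms = [{"affine": {"parameters": affines[s], "sources": [s]}}
                  for s in sources]
    transforms.append({"grid": {"sources": list(sources)}})
    return {
        "sourceDisplays": [{"imageDisplay": {"sources": list(sources)}}],
        "sourceTransforms": transforms,
    }

identity = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
views = {"scene1": make_view(["im1", "im2"], {"im1": identity, "im2": identity})}
print(json.dumps({"views": views}, indent=2))
```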

martinschorb commented 3 years ago

Sounds good. There are a few questions about this:

This scenario would illustrate it:

constantinpape commented 3 years ago
  • if I understand it correctly, sourceTransforms map from physical to physical space.

This is not fully defined. I think it's a good convention to do "pixelSpace"->"physicalSpace" in the xml (i.e. only a scale transform) and then do "physical"->"physical" (i.e. an affine that rotates and translates) in the source transforms. But in the spec itself I would rather not introduce these distinctions.
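As a small numeric illustration of that convention (all numbers invented): the xml contributes only the voxel-size scaling, the sourceTransform contributes a rigid physical-to-physical registration, and composing the two gives the full voxel-to-registered-physical transform.

```python
import numpy as np

# xml part: only a scale, "pixelSpace" -> "physicalSpace" (anisotropic voxels)
voxel_to_physical = np.diag([0.1, 0.1, 0.25, 1.0])

# sourceTransform part: physical -> physical (rotation about z plus translation)
theta = np.deg2rad(30)
registration = np.eye(4)
registration[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]]
registration[:3, 3] = [12.0, -3.5, 0.0]

# full transform: scale first, then register
full = registration @ voxel_to_physical

# the pixel-space origin lands at the registration's translation
assert np.allclose(full @ [0, 0, 0, 1], [12.0, -3.5, 0.0, 1.0])
```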

  • so does grid, correct?

Grid arranges the sources in a grid. This is done in the space that results after applying the preceding transforms. So yes, for most practical purposes this will be in physical space.

  • grid always displays the full individual volumes. How does it deal with volumes that have different sizes?

I think we haven't fully solved this case yet. There are two options: find an optimal packing for the different shapes or take the maximal shape of the individual sources.
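The "maximal shape" option could be sketched like this (a hypothetical helper, 2D for brevity): every grid cell gets the per-axis maximum of all source extents, so differently sized sources never overlap.

```python
def grid_offsets(shapes, n_cols):
    """Place sources of different physical extents on a regular grid.

    The cell size is the per-axis maximum over all shapes (one of the two
    options mentioned above); returns one (x, y) offset per source.
    """
    cell = tuple(max(s[d] for s in shapes) for d in range(2))
    offsets = []
    for i, _ in enumerate(shapes):
        row, col = divmod(i, n_cols)
        offsets.append((col * cell[0], row * cell[1]))
    return offsets

# three sources with different extents, arranged in two columns:
# the cell becomes (10, 12), so no source can overlap its neighbour
print(grid_offsets([(10, 8), (7, 12), (9, 9)], n_cols=2))
```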

For those, would it make sense to also have a gallery of viewers? In fact, my "grid" idea illustrated above was going more into this direction rather than the grid as it is implemented right now.

This scenario would illustrate it:

* one volume/source (say the platy)

* I have bookmarks for all brezel-shaped nuclei. (n=25)

* I like to show all of these next to each other in a gallery of 5x5 viewers. Each viewer would be an independent BDV showing the respective bookmark. Ideally the controls of these BDVs would be linked (BDV-PG can do this) and they would appear in one common viewer-gallery window instead of individual windows.

I don't think we want to support multiple viewers for now, because this would complicate the Java code extremely. But this is ultimately up to @tischi, who is the only one who can really judge this; maybe it would not be as difficult any more with BDV-PG. (Supporting this on the spec side would probably be easy and just mean another flag.)

But note that what you describe is also possible in a single viewer: we allow duplication of sources in the grid view already. And as far as I remember previous discussions, it should then also be possible to navigate within a single source using BDV-PG code. (The only thing I am not so sure about right now is how we would specify that different transformations are applied to the same source at different grid positions, but we can figure that out; it might even be possible in the current spec proposal and I am just not seeing how.)

martinschorb commented 3 years ago

But note that what you describe is also possible in a single viewer: we allow duplication of sources in the grid view already.

This could be one way to go. In a gallery scenario where multiple bookmarks are shown, would you be able to place a mask to restrict the individual source views to not overlap with the neighbours? Otherwise the big source volume would be shown many times on top of each other but in different orientations.

constantinpape commented 3 years ago

In a gallery scenario where multiple bookmarks are shown, would you be able to place a mask to restrict the individual source views to not overlap with the neighbours? Otherwise the big source volume would be shown many times on top of each other but in different orientations.

I don't really know what you mean by mask here. Do you mean showing only a cutout of the full dataset? In any case, I don't think that's supported yet.

martinschorb commented 3 years ago

I don't really know what you mean by mask here. Do you mean showing only a cutout of the full dataset? In any case, I don't think that's supported yet.

Yes, that would be desired. The big advantage of the gallery is that multiple target objects can be viewed next to each other. What bookmarks do: focus the view to an object. Now we like to combine these two. Showing many objects side-by-side, no matter whether they come from different sources or the same source.

I can see two ways of implementing this:

constantinpape commented 3 years ago

What bookmarks do: focus the view to an object.

Not really. In our current spec bookmarks are views, which store the complete viewer state. This can be used to focus on an object of interest, but supports much more than that.

Now we like to combine these two. Showing many objects side-by-side, no matter whether they come from different sources or the same source.

Yes, as we have discussed, that's already supported by the grid-view. You can have a look of how this currently works in the following project: https://github.com/mobie/zebrafish-lm-datasets/tree/spec-v2. You need to use MoBIE2; each dataset contains a grid-view and small-grid-view bookmark.

I can see two ways of implementing this:

* a window that contains multiple viewers in a tiled gallery that show a single bookmark each. These viewers would need to be synchronized in such a way, that the display scale is consistent.

* masks/clipping planes that restrict the visibility of a source to a certain FOV. Then the different bookmarks could be shown as separate sources inside one common viewer. The mask is required to eliminate overlap.

Ok, I understand now. And indeed this should be handled somehow.

Regarding the first option (again as mentioned above): @tischi would need to comment if that's feasible in our current viewer model or not. From the spec perspective it would be pretty simple to do, I would probably introduce a new type of transformation gallery for this.

Regarding the second option: ok, I would not call this mask (because we already use this to refer to pixel masks elsewhere), rather regionOfInterest. Easy to include in the spec; but I am not sure what this would involve on the viewer side.
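A regionOfInterest source transform might look like the following sketch. The key and parameter names are purely hypothetical — nothing like this exists in the spec yet — and the predicate just illustrates the visibility test a viewer would need.

```python
# Hypothetical sketch only: neither the "regionOfInterest" key nor these
# parameter names exist in the MoBIE spec yet.
roi_transform = {
    "regionOfInterest": {
        "min": [10.0, 20.0, 0.0],   # physical coordinates
        "max": [60.0, 70.0, 25.0],
        "sources": ["im1"],
    }
}

def in_region(point, roi):
    """True if a physical point lies inside the axis-aligned ROI box."""
    return all(lo <= p <= hi
               for p, lo, hi in zip(point, roi["min"], roi["max"]))

assert in_region([30, 40, 10], roi_transform["regionOfInterest"])
assert not in_region([5, 40, 10], roi_transform["regionOfInterest"])
```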

martinschorb commented 3 years ago

You can have a look of how this currently works in the following project: https://github.com/mobie/zebrafish-lm-datasets/tree/spec-v2

Where exactly can I find the bookmark specs? Are these included in the grid-view tables? I could not find bookmark JSON files anywhere.

martinschorb commented 3 years ago

is it all in the respective dataset.json?

constantinpape commented 3 years ago

Yes, it's in dataset.json:views. See https://github.com/mobie/zebrafish-lm-datasets/blob/spec-v2/data/actin/dataset.json#L1600. Note that additional bookmark files in misc/bookmarks are still supported, but they are not loaded in the viewer by default and must be loaded via the context menu.

constantinpape commented 3 years ago

#44 allows specifying the crop as a source transform now. @tischi still needs to implement this in the viewer.

@martinschorb could you generate an example for this we could use for testing? Otherwise I could see if I can generate something in one of our example projects.

tischi commented 3 years ago

Let me know once there is something and I will then try to implement it. I already checked with @NicoKiaru and there should be code in the playground that could be used/adapted for this.

martinschorb commented 3 years ago

I put one test project in /g/emcf/schorb/data/mobie_crop It should be write accessible for all of you to play. I created the spec2 jsons manually and it is crashing for me, probably I made a mistake somewhere. The crop coordinates are correct though...

constantinpape commented 3 years ago

@tischi @martinschorb I added an example project with a crop bookmark based on the data from martin: /g/emcf/pape/mobie-test-projects/mobie_crop

tischi commented 3 years ago

I am taking Friday off, but I think I will get to it early next week!