catalystneuro / ndx-microscopy

An enhancement to core NWB schema types related to microscopy data.
MIT License

[Discussion] Extension's Structure Proposal #4

Open alessandratrapani opened 8 months ago

alessandratrapani commented 8 months ago

This issue is to discuss the extension structure.

With this first proposal I try to take into account all the ideas in #1, the metadata from other standards as suggested in #3, and several issues reported in nwb-schema and pynwb: [1](https://github.com/NeurodataWithoutBorders/nwb-schema/issues/538) [2](https://github.com/NeurodataWithoutBorders/nwb-schema/issues/517) [3](https://github.com/NeurodataWithoutBorders/nwb-schema/issues/431) [4](https://github.com/NeurodataWithoutBorders/nwb-schema/issues/406) [5](https://github.com/NeurodataWithoutBorders/nwb-schema/issues/343) [6](https://github.com/NeurodataWithoutBorders/pynwb/issues/1736)

NB:

  1. The schemas are still WIP: I will add more details in written form later
  2. Reference arrows have been left out of the schema for a more compact visualization (they will be included in the final version)
  3. The OptogeneticSeries links to Microscope and LightSource are represented in the schema for the MicroscopySeries

Ophys Acquisition

ndx-microscopy

Optogenetics

It also takes into account the extensions for patterned photostimulation: https://github.com/catalystneuro/ndx-holographic-stimulation, https://github.com/histedlab/ndx-photostim, and ndx-ogen.

h-mayorquin commented 8 months ago

I have been meaning to start this discussion for the last month, but it was not until last week that I met with the members of the Clandinin group.

Recently, the Clandinin lab came up with a proposal for doing image registration to standard atlases called Bifrost. While the details of how to do registration are complex, I think we can move the NWB format forward by having at least the ability to express the most basic of spatial coordinate transformations: the affine transform.

In fMRI neuroimaging, this transformation is used to express how the data in voxel space (i.e., the data as it is) can be transformed into real-world coordinates. A good summary can be found in the following tutorial, but the idea is simple: every image/video has a matrix of the following kind:

[image: 4×4 affine transformation matrix]

That is used to express the rotations and translations that would be necessary to express the data as it is (the voxel or pixel space) into lab coordinates.
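To make the voxel-to-world mapping concrete, here is a minimal sketch in plain numpy (the spacing and origin values are hypothetical, and this is not any existing NWB API): the per-axis spacing fills the diagonal of a 4×4 affine, the world-frame origin fills the translation column, and homogeneous coordinates let a single matrix product do the mapping.

```python
import numpy as np

# Hypothetical values: 2 µm pixel spacing in x/y, 5 µm between planes,
# with the imaging volume's corner at (100, 200, 50) µm in lab coordinates.
grid_spacing = np.array([2.0, 2.0, 5.0])        # µm per voxel along each axis
origin_coords = np.array([100.0, 200.0, 50.0])  # µm, lab-frame origin of the volume

# Build the 4x4 affine: scaling on the diagonal, translation in the last column.
affine = np.eye(4)
affine[:3, :3] = np.diag(grid_spacing)
affine[:3, 3] = origin_coords

# Map a voxel index (i, j, k) to lab coordinates via homogeneous coordinates.
voxel = np.array([10, 20, 3, 1])  # homogeneous voxel index
world = affine @ voxel
print(world[:3])  # [120. 240.  65.]
```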

Right now, I think that the current elements of the ImagingPlane map to the following elements of the Affine matrix formulation for expressing real world coordinates:

As you can see, we are missing the ability to express rotations, which usually affect the non-diagonal terms of the matrix (and also the diagonal). That is, having the concept of the affine matrix would generalize our two existing fields and expand the expressive power of the proposed ImagingSpace concept. I think this is a useful concept to have.
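As a sketch of that generalization (hypothetical values; not an existing schema field), composing a rotation with the spacing matrix is exactly what populates the off-diagonal terms that grid spacing and origin coordinates alone cannot express:

```python
import numpy as np

theta = np.deg2rad(30)  # hypothetical 30° in-plane rotation of the imaging plane
rotation = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
grid_spacing = np.array([2.0, 2.0, 5.0])        # hypothetical spacing, µm
origin_coords = np.array([100.0, 200.0, 50.0])  # hypothetical origin, µm

affine = np.eye(4)
# The rotation mixes the axes, producing non-zero off-diagonal terms
# that a pure spacing + origin description cannot represent.
affine[:3, :3] = rotation @ np.diag(grid_spacing)
affine[:3, 3] = origin_coords
```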

The limitation of this is that it only allows us to express affine transformations. Non-affine transformations are a different type of beast. One representation of non-affine transformations is the displacement field, where basically every voxel has a vector assigned to it indicating how it should be displaced. In NIfTI they are represented with the following structure:

X x Y x Z x 1 x 3

We could include a general coordinate_transformation_matrix field that can be either the affine matrix above or a displacement field as used by NIfTI, as a more generalized idea, but my impression while talking with the folks from the Clandinin lab was that they were less confident that "this is the way to represent non-affine transformations". Maybe this generalization is not worth doing at the moment.
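A toy illustration of the displacement-field idea (a tiny hypothetical volume in plain numpy; this is not NIfTI I/O itself): each voxel gets its own 3-vector, giving the X x Y x Z x 1 x 3 shape above, and the warp is applied per voxel rather than through one global matrix.

```python
import numpy as np

X, Y, Z = 4, 4, 2  # tiny hypothetical volume
# NIfTI-style displacement field: one 3-vector per voxel, shape (X, Y, Z, 1, 3)
displacement = np.zeros((X, Y, Z, 1, 3))
displacement[..., 0, 0] = 1.5  # toy warp: shift every voxel +1.5 units along x

# Dense grid of voxel coordinates, shape (X, Y, Z, 3)
coords = np.stack(
    np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij"),
    axis=-1,
).astype(float)

# A non-affine transform moves each voxel independently.
warped = coords + displacement[..., 0, :]
print(warped[0, 0, 0])  # [1.5 0.  0. ]
```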

h-mayorquin commented 8 months ago

Another concept that maybe needs refinement is the use and scope of the reference frame field. Currently we have reference_frame, which expresses, as a free-text description, what the origin is (e.g. Bregma). For example, here is the origin at Bregma in the Paxinos atlas.

[image: Bregma as the origin in the Paxinos atlas]

Which is what I think inspired the field at the time. However, in neuroimaging it seems more common to agree that the axes are given anatomically (right-left, posterior-anterior, inferior-superior), and then there are different conventions for the signs of the given axes. Check this image:

[image: anatomical axis conventions and their sign variants]

That is, if we are going to use the reference_frame field (or something else) as a description of what the axes mean, maybe we should update the string to contain a set of working examples, as the current one is not very descriptive in my opinion.
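As a sketch of what more descriptive reference_frame strings could look like (the wording below is hypothetical and not proposed schema text; the conventions named are the common anatomical ones):

```python
# Hypothetical examples of reference_frame strings that state the origin
# AND the anatomical meaning and sign of each axis, rather than only the origin.
reference_frame_examples = [
    "Origin: Bregma. +x: posterior, +y: lateral (right), +z: ventral.",
    "Origin: anterior commissure. +x: right, +y: anterior, +z: superior "
    "(RAS convention, as commonly used with NIfTI).",
]
for example in reference_frame_examples:
    print(example)
```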

ehennestad commented 5 months ago

With regards to the light source, what are the intended values to record under power and intensity? Is it the power/intensity of the light source itself, or the power/intensity measured under the objective, which in most cases is the more meaningful one to record? I assume a description of this would be added in the docs, but maybe it could be made more explicit by, for example, considering names like source_power or target_power / power_at_target.

h-mayorquin commented 3 months ago

For the topic of coordinates a good discussion can be found here:

https://github.com/ome/ngff/pull/138