bids-standard / bids-specification

Brain Imaging Data Structure (BIDS) Specification
https://bids-specification.readthedocs.io/
Creative Commons Attribution 4.0 International

Revisiting space definition in BIDS (reference frames, coordsys) with respect to Motion extension #1488

Closed sjeung closed 1 year ago

sjeung commented 1 year ago

Dear everyone in BIDS community,

In the past couple of days, I have been corresponding with @robertoostenveld to revisit some points regarding space definition, within the broader context of BIDS as a whole, not just motion tracking data. He had very helpful insights, especially regarding pitfalls to beware of when contributing an extension to BIDS. Some of these points were brought up in the past; in the very first drafts of Motion-BIDS we tried to be more proactive in defining things such as nested reference frames, but later decided to refrain from being specific. Still, we hope future developments will gradually resolve these problems, and here I would like to put together the gaps we have identified in the process and the reasoning behind why some ideas were adopted or rejected, and make sure we can generally agree on how we deal with this as of now.

Please chime in to let us know what you think and especially whether you can agree on “suggestions” at the end of this post.

Current specification

Conflicts & issues

  1. The BIDS extension for motion data does not make reference to concepts already defined within BIDS. Specifically, the following definitions are relevant:
    a. Axis labels: anatomical locations elsewhere in BIDS use labels such as ALS (anterior-left-superior), whereas _motion.json uses FLU (forward-left-up).
    b. Anatomical space versus non-anatomical device (or sensor) space: this distinction has already been made.

  2. Within the same tracking system, motion channels can have distinct reference frames defined in different spaces, and currently there is no way to communicate this in either _motion.json or _channels.tsv:
    a. Anatomical space (the origin of the space itself can move): anatomical space can be hierarchical. For instance, joint angles can be expressed around an axis tied to the forearm; this value would not change no matter which way the participant is facing in the global space. Or, when the head moves and the point between the ears moves with it, electrode locations defined with respect to the head reference frame remain static throughout.
    b. Sensor (device) space: here the origin can be a single fixed point within the room, or multiple fixed points (IMUs with their respective positions at the onset of recording).

Until the release of the next BIDS version, the goal is not to immediately solve these issues, but to make sure the current specification will not be a hindrance to future developments. Considering backwards compatibility, we should check what may be required for "re-attaching the head back to the body" within BIDS. It is difficult to know ahead of time what can cause compatibility issues with future solutions (for example, an anatomical library of sensor positions) without thinking about what these solutions will look like and how they will be plugged into BIDS. Ideally, when the space definitions for the head (electrodes, MR images, ...) and body parts are connected seamlessly, we should be able to map the static sensor/electrode locations to the global space, following the movement of the participant (regardless of whether this is a useful thing to do at the moment).

Suggestions from my side

  1. Motion-BIDS does not map the axes to the anatomical reference frame yet.

    a. The anatomical reference frame is what is referred to in EEG, MEG, etc. At the moment, a description of the MOTION coordinate system in anatomical terms (ALS, origin somewhere, ...) is not meaningful, because the definitions of those "origins" or "fiducials" are missing. If this were to be used, a significant amount of data wrangling (inverse kinematics, etc.) would have to accompany it.

    b. It is true that when we visualize or try to interpret motion data, there are at least three scenarios, as Robert pointed out. Sometimes the channels all share the same origin and axes (floor center). Other times, as with IMUs, the origins are shifted relative to each other. Or the origin itself moves throughout the recording, as when the finger is treated as the "child" object of the hand within a hierarchy and that "local" transform is recorded with respect to the center of the hand. Unless we know how those differing origins or moving object IDs can be communicated, it is not meaningful to make this distinction in the current spec.

    c. _channels.tsv for motion data has a column "tracked_point", which currently contains user-defined labels of whatever is being tracked. In the future, this could refer to an external library; using that information, one could infer where the anatomical origin used in neuroimaging is (e.g., finding the center point between the ears with respect to moving body parts through inverse kinematics), and then really start talking about anatomical coordinate systems.

  2. We will update the description of the field "SpatialAxes" in _motion.json so that it is only given when the axes (in non-anatomical device space) have a physical meaning (say, when one can really answer "What's up?" in the shared data: down is where the floor is in a normal physical room, in the direction of the gravitational pull). That way, it will not misguide users into thinking that IMU axes should have some physical meaning. (A sketch of such a _motion.json follows this list.)

    a. Admittedly, even for room-scale recordings within a global space with a single origin, the definition of left-right and forward-backward is arbitrary. But sharing a string like FRU will enable users to reasonably visualize the data, so that we can make out the shape of a person standing or walking, for instance, without mixing up the vertical and horizontal planes or flipping sides relative to each other. When this coherence conveniently exists in the data, we should enable people to share it.

    b. If, within the same recording from the same device, a channel subgroup has differing spatial axes (like joint angle channels using an anatomical reference frame), it is to be separated out as an independent _motion.tsv file with a separate _motion.json.

    c. In the future, more fine-grained information about arbitrary spatial axes (where up-down does not mean anything, because the axis is the forearm or the axis was tilted with the IMU) can be stored in the _motion.json file (as a set of vectors, a set of points defined on a 3D template of a human body model, or even as a separately stored image file; establishing such a convention is what I think goes beyond the scope of BIDS).

  3. We will make these points more explicit in the documentation of the motion spec and the coordsys page, as well as in the manuscript we are working on for Motion-BIDS.
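For concreteness, here is a minimal sketch of the device-space fields in a _motion.json discussed under suggestion 2; the values are illustrative only, and TaskName and SamplingFrequency are assumed here just for context:

{
    "TaskName": "walking",
    "SamplingFrequency": 120,
    "SpatialAxes": "FRU",
    "RotationOrder": "ZXY",
    "RotationRule": "right-hand"
}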

This became a very long post, so thank you very much for reading it through. I think the difficulty comes from the fact that motion tracking is mostly not done in anatomical space. It might be a trivial statement, but where anatomy is about the static, unchanging property of what is inherently attached to what, motion tracking (at least in the use cases I am familiar with) is simply far more flexible and relational, making it difficult to interpret in terms of anterior-posterior (what is anterior to an animal or human when their looking direction is orthogonal to their walking direction?). All of this is why I imagine motion data in the literature is not described along anatomical axes. So when asking "which body part was tracked?" (= how was the sensor placed?) it will be useful to use anatomical planes, but when asking "how did this body part move?" it is often enough to have a generic description of the spatial axes.

I will wait for your feedback, tagging @JuliusWelzel @helenacockx @sappelhoff @Remi-Gau @guiomar @smakeig @neuromechanist among many others.

arnodelorme commented 1 year ago

This is a great effort. I think one important part that is not currently addressed is the synchronization with other modalities. Right now, I think the consensus is

  1. Use the scans.tsv file "acq_time" variable to determine the relative time in both data files (for example, motion and EEG) and align the first sample.
  2. Use common events in the events.tsv files of both modalities to further align the recordings and take into account small differences in sampling rates between the different modalities. I believe these events should have a specific type ("sync"?), or maybe there could be a separate column with boolean values indicating whether specific events can be used for synching.

I know this is slightly outside the scope of motion capture, but still important.
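To make the two steps concrete, here is a minimal Python sketch; all timestamps, event onsets, and the 120 Hz rate are invented for illustration:

from datetime import datetime
import numpy as np

# Step 1: coarse alignment from the scans.tsv "acq_time" column.
eeg_start = datetime.fromisoformat("2023-05-09T10:00:00")
motion_start = datetime.fromisoformat("2023-05-09T10:00:01.250000")
coarse_offset_s = (motion_start - eeg_start).total_seconds()  # 1.25 s here

# Step 2: refine with shared "sync" events from the two events.tsv files.
# A linear fit maps motion onsets onto EEG onsets, absorbing the residual
# offset as well as the relative clock drift between the two devices.
eeg_sync = np.array([10.000, 70.004, 130.008, 190.012])     # onsets on the EEG clock (s)
motion_sync = np.array([8.750, 68.752, 128.754, 188.756])   # same events, motion clock (s)

a, b = np.polyfit(motion_sync, eeg_sync, 1)  # eeg_t ~ a * motion_t + b
motion_t = np.arange(0, 200, 1 / 120)        # a 120 Hz motion time axis (s)
motion_t_on_eeg_clock = a * motion_t + b     # motion samples expressed in EEG time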

smakeig commented 1 year ago

But Arno, how are modality-shared 'sync' events to be recorded? For example, in studies recording EEG and eye tracking, no such common events are logged. Our own studies, and the hundreds of others now using LabStreamingLayer, avoid the need for such a hardware-enabled solution.


arnodelorme commented 1 year ago

Not everybody uses LSL. It is common to have multiple recording apparatuses synched by TTL pulses. If the data is recorded with LSL, then such events can be created from the data streams. Computer clock frequencies are not exact: when you ask for a 300 Hz sampling frequency, the signal might actually be acquired at 300.01 Hz. That seems like nothing, but 0.01 Hz amounts to 120 ms after one hour (0.01 * 3600 = 36 samples, which represents 36/300 = 0.12 seconds), which is very large by EEG synchronization standards and cannot be ignored.
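The same drift arithmetic in Python, for anyone who wants to check the numbers:

nominal_hz = 300.0
actual_hz = 300.01
seconds = 3600
extra_samples = (actual_hz - nominal_hz) * seconds  # 0.01 * 3600 = 36 extra samples
drift_s = extra_samples / nominal_hz                # 36 / 300 = 0.12 s after one hour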

robertoostenveld commented 1 year ago

In a recent experiment, @helenacockx and I used a combination of methods: the task (LSL), NIRS (LSL), motion capture (TTL), and video+audio from camcorders (an inaudible beep in one of the stereo channels) needed to be synchronized. We used the scans.tsv approach that Arno listed as "1" to specify the synchronization between modalities (note that the video is not shared, but video annotations of the behavior are, together with the task details in events.tsv). Furthermore, we trimmed the start of the raw motion and NIRS datasets so that they align. The result is this scans.tsv.

[Screenshot of the scans.tsv: a filename column listing the nirs and motion recordings and an acq_time column with matching timestamps]

There are 4 experimental blocks or runs, each with one raw NIRS and one synchronous motion recording. We are still discussing whether the motion needs to be split into two files to accommodate the different reference frames in which the motion data is expressed, as discussed in the first post. Let me know if anything is unclear w.r.t. synchronization, but preferably through another channel, as it is not specific to motion data.
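The screenshot content is only approximated here; an entirely hypothetical scans.tsv for such a session could look like (filenames, labels, and timestamps are invented):

filename                                                  acq_time
nirs/sub-01_task-walk_run-1_nirs.snirf                    2023-05-10T09:00:00
motion/sub-01_task-walk_run-1_tracksys-omc_motion.tsv     2023-05-10T09:00:00
nirs/sub-01_task-walk_run-2_nirs.snirf                    2023-05-10T09:14:30
motion/sub-01_task-walk_run-2_tracksys-omc_motion.tsv     2023-05-10T09:14:30

(and likewise for runs 3 and 4)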

smakeig commented 1 year ago

Arno - You write:

... synched by TTL pulses. If the data is recorded with LSL, then such events can be created from the data streams.

Actually, LSL does create such 'virtual events' - e.g., one 'time_stamp event' for every sample in each data stream. Naturally, including all these 'virtual events' in the 'events.tsv' files for each modality is not optimal; the time_stamp latencies would be far more simply and effectively maintained as a channel of type 'LSLtime' in the data matrix, or located within the 'run/scan' folder for each recording modality. If, for some modalities, there are occasional TTL pulse events as well, their latencies might be conveniently noted in the events files.


arnodelorme commented 1 year ago

Thanks Robert. For the scans.tsv files, it seems that it is also possible to specify milliseconds. It was not done in your example, you used seconds, but milliseconds may be used, right? Scott, yes, an additional channel for synchronization would work as well.

sjeung commented 1 year ago

@smakeig @arnodelorme Yeah, in response to related discussions in the past we included a latency channel here, which can be especially useful for LSL-recorded data sets.
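For illustration, a motion _channels.tsv carrying such a channel could look like this (channel and tracked-point names are invented; LATENCY as a channel type is per the motion spec, the rest is assumed; columns shown space-aligned for readability):

name           component  type     tracked_point  units
lsl_timestamp  n/a        LATENCY  n/a            s
head_pos_x     x          POS      head           m
head_pos_y     y          POS      head           m
head_pos_z     z          POS      head           m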

sjeung commented 1 year ago

So earlier today @helenacockx, @robertoostenveld, @JuliusWelzel, and I had another meeting to further discuss the space-related standards and how the current spec may already be limiting future developments.

The example data set (Xsens) in Robert's comment above actually illustrates a scenario where having one "SpatialAxes" (as well as RotationOrder and RotationRule) defined per .json can be inappropriate. Of the 100+ channels in one tracksys (grouped together per the current definition of tracksys, which does not take reference frames into account), some tens are joint angles, each of which has a distinct reference frame, while the rest of the channels share one global reference frame. It is not clear in the current spec whether the joint angles should then be presented as a single separate tracksys (tracksys-XsensJoints) or whether each becomes a tracksys (tracksys-XsensJoint1, tracksys-XsensJoint2, ...).

A suggested solution is

The motivation for doing this is to have some structure in place for potential future developments to be plugged into, instead of a half-informative standardized field (i.e., the current _motion.json).

The specification itself, the example data set, and the bids-validator will then have to be updated accordingly. @sappelhoff @Remi-Gau @effigies Do you have any thoughts on this suggestion? If you think this is feasible, Julius and I will go ahead, implement the related work packages, and ping you back.

VisLab commented 1 year ago

I support taking a conservative approach and not prematurely committing to a format and specification that locks future developments out. The HED Working Group struggled with specifying coordinate systems and frames of reference in HED tags and eventually concluded that this is a very hard problem that needed to be tackled more globally. Thanks for your work on this.

JuliusWelzel commented 1 year ago

It was not done in your example, you used seconds, but milliseconds may be used right?

Hey @arnodelorme, yes, a channel of type LATENCY does allow sub-second precision.

robertoostenveld commented 1 year ago

It was not done in your example, you used seconds, but milliseconds may be used right?

And in the scans.tsv file you can also use milliseconds, see here.

sappelhoff commented 1 year ago

@sappelhoff @Remi-Gau @effigies Do you have any thoughts on this suggestion? If you think this is feasible, Julius and I will go ahead, implement the related work packages, and ping you back.

I am happy to see that practical dataset curation has revealed some issues and that we get a chance to solve them before a release of the spec. +1 to go ahead with the changes in a PR, which can then be reviewed and discussed.

users can optionally use something like ref-global_coordsys.json as a (yet) unofficial solution, and these files will go into .bidsignore until this is properly specified in the next version.

I am not so happy about any solutions that involve .bidsignore. Why can't the definitions go into the respective channels.json files if they refer to column entries in channels.tsv?

helenacockx commented 1 year ago

There is currently no such thing as channels.json, or do you mean to create a new file channels.json? However, I don't dislike your idea of further specifying the coordinate systems in a channels.json file. We thought that *_coordsystem.json would be more logical, given that this is also an existing file type for EEG or NIRS, for example. @sjeung @JuliusWelzel @robertoostenveld, what do you think of this suggestion?

sjeung commented 1 year ago

I do like the solution suggested by Stefan; I completely forgot about channels.json. But as Helena points out, I somehow can't find official documentation about channels.json specifically, although I remember it being brought up in some discussions. I think it is inferred from this bit on general principles for tsv files.

robertoostenveld commented 1 year ago

Agree about channels.json being a good solution! It allows a (still relatively informal) documentation of the reference frame within the current specification.

sappelhoff commented 1 year ago

I think it is inferred from this bit on general principles for tsv files.

yes, that is what I was referring to:

Tabular files MAY be optionally accompanied by a simple data dictionary in the form of a JSON object within a JSON file. The JSON files containing the data dictionaries MUST have the same name as their corresponding tabular files but with .json extensions. If a data dictionary is provided, it MAY contain one or more fields describing the columns found in the TSV file (in addition to any other metadata one wishes to include that describe the file as a whole). Note that if a field name included in the data dictionary matches a column name in the TSV file, then that field MUST contain a description of the corresponding column, using an object containing the following fields

robertoostenveld commented 1 year ago

Helena just proposed the following in her draft dataset:

{
    "reference_frame": {
        "LongName": "reference frame",
        "Description": "reference frame in which the channels is represented.",
        "Levels": {
            "global": {
                "SpatialAxes": "ALS",
                "RotationOrder": "ZXY",
                "RotationRule": "right-hand"
            },
            "local": {
                "CoordinateSystemDescription": "joint angles are described following the ISB-based coordinate system, with a local reference frame attached to the body segment. See Wu and Cavanagh (1995), Wu et al. (2002), Wu et al. (2005), and the Xsens MVN Awinda user manual for more information."
            }
        }
    }
}

I suggest the following

{
    "reference_frame": {
        "LongName": "reference frame",
        "Description": "reference frame in which the channel is represented.",
        "Levels": {
            "global": "ALS coordinate system relative to the initial calibration position of the participant at the start of the run. The rotation order is ZXY and is right-handed.",
            "local": "Joint angles are described following the ISB-based coordinate system, with a local reference frame attached to the body segment. See Wu and Cavanagh (1995), Wu et al. (2002), Wu et al. (2005), and the Xsens MVN Awinda user manual for more information."
        }
    }
}

which makes the two more equal/balanced. Rather than using an "Object of objects" (aka nested objects) for reference_frame.Levels, it is simply an "Object of strings", as per the specification.

I also added the initial calibration position (I hope I have that correct) and removed an "s" from "channels" in reference_frame.Description.

JuliusWelzel commented 1 year ago

@robertoostenveld This solution would make shared data more informal, as the relevant information would not be standardized. The previously introduced fields in *_motion.json regarding a reference frame would be dropped. This would lead to a lower level of standardization, even for the cases where the previously introduced fields work. I would suggest moving the fields regarding reference frames (SpatialAxes, RotationOrder, RotationRule) from the *_motion.json to the *_channels.json and adding the option n/a if something is not specified, as elsewhere in BIDS. This would be my suggestion for Helena's dataset:

{
    "reference_frame": {
        "LongName": "reference frame",
        "Description": "reference frame in which the channels is represented.",
        "Levels": {
            "global": {
                "CoordinateSystemDescription": "n/a",
                "SpatialAxes": "ALS",
                "RotationOrder": "ZXY",
                "RotationRule": "right-hand"
            },
            "local": {
                "CoordinateSystemDescription": "joint angles are described following the ISB-based coordinate system, with a local reference frame attached to the body segment. See Wu and Cavanagh (1995), Wu et al. (2002), Wu et al. (2005), and the Xsens MVN Awinda user manual for more information.",
                "SpatialAxes": "n/a",
                "RotationOrder": "n/a",
                "RotationRule": "n/a"
            }
        }
    }
}

Overall I like the solutions using *_channels.json here.

helenacockx commented 1 year ago

For me, both solutions are fine. I could still add this to the CoordinateSystemDescription of the global reference frame: "Global coordinate system relative to the initial calibration position of the participant at the start of the session."

robertoostenveld commented 1 year ago

@sappelhoff do you know whether Julius' suggestion above would pass the validator? It uses nested objects under Levels rather than an object of strings.

JuliusWelzel commented 1 year ago

@sappelhoff do you know whether Julius' suggestion above would pass the validator? It uses nested objects under Levels rather than an object of strings.

I know @sjeung had a solution for nested .json files in the validator at some point during Motion-BIDS development.

sappelhoff commented 1 year ago

@sappelhoff do you know whether Julius' suggestion above (https://github.com/bids-standard/bids-specification/issues/1488#issuecomment-1545409815) would pass the validator? It uses nested objects under Levels rather than an object of strings.

we would have to make adjustments to the JSON schema for this particular case, but it'd be doable. And for the new schema-based validator it shouldn't be a problem either (after the structure is implemented).
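For this particular case, the adjustment could look roughly like the following JSON Schema fragment, which allows each entry under Levels to be either a plain string or an object; this is a sketch, not the actual BIDS metaschema:

{
    "Levels": {
        "type": "object",
        "additionalProperties": {
            "oneOf": [
                { "type": "string" },
                { "type": "object" }
            ]
        }
    }
}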

helenacockx commented 1 year ago

That would be wonderful @sappelhoff!

Would we prefer to have the channels.json on the top level, or in each motion folder? I can imagine that a channels.json on the top level could lead to confusion, as it does not apply to the channels.tsv of the other folders (e.g., nirs, eeg).

sjeung commented 1 year ago

I would suggest moving the fields regarding reference frames (SpatialAxes, RotationOrder, RotationRule) from the *_motion.json to the *_channels.json [...] Overall I like the solutions using *_channels.json here.

As I am personally in favor of standardizing to some degree, this seems like a good solution, with a clearer definition of applicable situations. But if we do some standardization here (to make the validator work and to be useful for applicable cases), again I have to revive the discussion around ALS versus FLU. From what I understood, this data set has its axis definitions bound to external space, not a body coordinate system (except the joint angles). So why not FLU? We can make the FLU convention a "recommended" standard now and add in the ALS system for situations where the axes have anatomical meaning (but users can still use it now if we allow free-form specification).

And yes, nested JSON for the validator had been implemented in the past, and it is doable.
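To illustrate the point about the two label conventions, they could coexist as levels of a reference frame description, for example (a hypothetical fragment, not from the actual dataset):

"Levels": {
    "global": "FRU axes fixed to the recording room: F is the horizontal direction the participant faced at calibration, U points up, against gravity.",
    "torso": "ALS axes attached to the participant's torso: A points anterior from the sternum, S points from pelvis to head."
}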

sjeung commented 1 year ago

Regarding requirement level, I would leave only the three fields (SpatialAxes, RotationOrder, RotationRule) in the standard as "recommended", because only these can have machine-readable entries, allow people to add optional additional fields such as CoordinateSystemDescription, and leave their exact naming for future updates.

smakeig commented 1 year ago

Sein - Your proposal seems quite workable to me. Now, re nested reference frames (head < torso < room; screen < room; finger < hand < torso < room; etc.), can there at least be a standard for specifying how connecting frames are linked?

For example, the (2D) screen is centered in the room at (x,y,z) and angled outward w.r.t. room coordinates (mu, theta), producing a JSON description such as "screen_frame" { ... }
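A sketch of what this could look like, with every field name invented for illustration (no such convention exists in BIDS yet):

{
    "screen_frame": {
        "ParentFrame": "room",
        "OriginInParentXYZ": [1.2, 0.0, 1.5],
        "OrientationInParentDeg": [15.0, -10.0]
    }
}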

Scott


sjeung commented 1 year ago

Sein - Your proposal seems quite workable to me. Now, re nested reference frames (head < torso < room; screen < room; finger < hand < torso < room; etc.), can there at least be a standard for specifying how connecting frames are linked? [...]

Hi Scott, for sure this is on the list of aspects we want to consider in the next version. But as Kay also mentioned, we should probably take more time thinking about this and hopefully collect more community input.

helenacockx commented 1 year ago

As I am personally in favor of standardizing to some degree, this seems like a good solution, with a clearer definition of applicable situations. But if we do some standardization here (to make the validator work and to be useful for applicable cases), again I have to revive the discussion around ALS versus FLU. From what I understood, this data set has its axis definitions bound to external space, not a body coordinate system (except the joint angles). So why not FLU? We can make the FLU convention a "recommended" standard now and add in the ALS system for situations where the axes have anatomical meaning (but users can still use it now if we allow free-form specification).

I do not really see how using FLU instead of ALS would solve the problem of not having a meaningful description of up/down, forward/backward. Anterior is just the Latin translation of 'to the front', and superior is just the Latin translation of 'up' or 'above'. Nor do I see a big problem of misinterpretation of the axis definition. For instance, if the global reference frame of my dataset is described as "Global coordinate system relative to the initial calibration position of the participant at the start of the session.", people will understand that a positive coordinate on the x-axis is anterior to the calibration position, even if the tracked person is looking in the opposite direction.

sjeung commented 1 year ago

As I am personally in favor of standardizing to some degree, this seems like a good solution [...] So why not FLU?

I do not really see how using FLU instead of ALS would solve the problem of not having a meaningful description of up/down, forward/backward. Anterior is just the Latin translation of 'to the front', and superior is just the Latin translation of 'up' or 'above'. Nor do I see a big problem of misinterpretation of the axis definition. For instance, if the global reference frame of my dataset is described as "Global coordinate system relative to the initial calibration position of the participant at the start of the session.", people will understand that a positive coordinate on the x-axis is anterior to the calibration position, even if the tracked person is looking in the opposite direction.

Sure, when it comes to the exact meaning it makes no difference. But I wanted to consider the conventional usage of labels for those axes (as detailed in the first post of this thread; the point in favor of ALS was about anatomical conventions in the first place, which are mostly about static anatomical locations) and make the association less restrictive. I can imagine that ALS planes would be more appropriate for a data set where everything is referenced to the individual's torso, but other than that, a description of motion doesn't have to be body-bound. This is even more so in the spatial cognition studies or pointing tasks I am used to, which are not full-on biomechanics. In any case, I think the standard-related decisions don't have to be made in a hurry, and thanks for testing the ideas with your data set so far.

sjeung commented 1 year ago

@helenacockx In any case, even if you finalize your data set now with the ALS string, it will most likely still be compatible with the spec, because we don't intend to be exclusive about how it should be described as of now; the FRU-style description will only be a recommendation, provided it gets implemented that way.

JuliusWelzel commented 1 year ago

Hello everyone,

I am currently reworking the specification to match the discussion above. I noticed that MEG, EEG, and iEEG have their "own" definitions of coordinate systems within BIDS. So far there is also a section for Image-based Coordinate Systems. To move forward, I would like to settle on one of two solutions:

  1. Write a section for motion-specific coordinate systems and introduce a more general use of the term space. So far, space seems to be related only to electrode positions, but it could be used to label a specific coordsystem.json file for a motion channel.
  2. Introduce a new entity ref-<label> in the coordsystem.json filename, to match a motion channel to a specified coordsystem.json file. (This is the current solution in @helenacockx's dataset; see the filename sketch below.)
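For illustration, the two options could yield filenames like these (subject and tracksys labels are hypothetical, and the exact entity set is still undecided):

Option 1 (generalized space entity): sub-01_tracksys-imu_space-global_coordsystem.json
Option 2 (new ref entity): sub-01_tracksys-imu_ref-global_coordsystem.json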

Let me know what you think.

Best, Julius

sappelhoff commented 1 year ago

I am in favor of solution 1: making space more general so that it can apply to motion as well, and introducing a motion-specific section next to the MEG, EEG, iEEG, and image-based coordinate systems.

sjeung commented 1 year ago

Hi everyone, after more discussion and thinking, we are converging on the approach of using *_channels.json.

Concretely

We will get to opening all related PRs unless there is any foreseeable problem with this approach.

sappelhoff commented 1 year ago

@sjeung @JuliusWelzel could you please summarize this thread and tell us what's still needed from your side, now that this one has been merged?

JuliusWelzel commented 1 year ago

Hello @sappelhoff,

this thread started out because the information about a coordinate system was previously stored in the motion.json file. The motion.json file is intended to cover metadata for a single tracking system. It might be that one tracking system records data from different channels that have to be interpreted in different coordinate systems. Therefore, we decided to allow the specification of a related coordinate system per channel. This should now be done in the channels.json file under reference_frame (a term newly introduced by Motion-BIDS). Within the field reference_frame, various Levels of coordinate systems can be defined. These levels can be matched to each channel in the channels.tsv file. In the channels.json file we recommend using one of two options to describe the coordinate system in use:

  1. Use a pre-defined term for each of RotationOrder, RotationRule, and SpatialAxes
  2. Use Description, which is intended as a free-form description by the user

To summarize, this approach allows the user to define the coordinate system per channel for a tracking system. Additionally, we allow a more free-form description if needed. A sketch of the resulting files follows.
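For illustration, the adopted scheme pairs a reference_frame column in channels.tsv with level definitions in channels.json, roughly like this (channel names, tracked points, and units are invented; columns shown space-aligned for readability):

name           component  type  tracked_point  units  reference_frame
pelvis_pos_x   x          POS   pelvis         m      global
elbow_flexion  x          ORNT  elbow          deg    local

and the matching channels.json fragment:

{
    "reference_frame": {
        "LongName": "reference frame",
        "Description": "reference frame in which the channel is represented",
        "Levels": {
            "global": {
                "SpatialAxes": "ALS",
                "RotationOrder": "ZXY",
                "RotationRule": "right-hand"
            },
            "local": {
                "Description": "local reference frame attached to the body segment, following ISB-based joint angle conventions"
            }
        }
    }
}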

What we did not cover: the newly introduced term reference_frame is closely related to coordinate systems in BIDS. As the use of the term coordinate system in BIDS differs from its traditional meaning, that particular section was not revised. Perhaps it would be worth reconsidering this aspect for upcoming releases (e.g., Eye-Tracking). However, for now, this is beyond the scope of Motion-BIDS.

sappelhoff commented 1 year ago

Great, thank you very much for the summary @JuliusWelzel -- I will close this issue now and leave it up to you and Sein to decide whether to open a new, concise and targeted issue about the coordinate systems.