clariusdev / motion

IMU Technical Information and Examples
https://www.clarius.com
BSD 3-Clause "New" or "Revised" License

Understanding Sensor Readings and Co-Ordinate systems #7

Closed · benjaminralex closed this issue 1 year ago

benjaminralex commented 2 years ago

Hello,

I am currently trying to use the sensor readings from the Clarius probe and have the following questions. I can't seem to find any relevant documentation or explanations on GitHub. I have attached a sample imu.txt file (the text version of the yml file) with this question. The scan in question was a very simple one: the probe was held face down on a table, then moved forward, right, and up in as straight a trajectory as possible.

The following are my questions:

1) Is the quaternion format (wxyz) or (xyzw), corresponding to q0, q1, q2 and q3?

2) Why is the first quaternion not one that yields an identity rotation matrix, or at least close to identity? It seems like the first quaternion indicates that the probe was held at some angle.

3) Relating to the above question, what is the co-ordinate system with which the orientation is defined, i.e. the quaternion reports the orientation of the probe relative to which fixed co-ordinate system?

4) What is the physical location and orientation of the co-ordinate system attached to the sensor?

5) Is the co-ordinate system consistent across the accelerometer, gyroscope and magnetometer?

6) When reconstructing or displaying tracked frames, how do you determine the position of the frames from the raw readings? It is impossible to create tracked frames or volumes without the position info.

Any and all guidance and help on this would be much appreciated. We would really like to use the capabilities of the Clarius probe, but as it stands now, this seems like crucial documentation that is altogether missing.

Thanks!

Attachment: imu.txt

julien-l commented 2 years ago

Sorry for the late answer.

  1. Is the quaternion format (wxyz) or (xyzw), corresponding to q0, q1, q2 and q3?

    • q0 = w (real) component of the quaternion
    • q1 = i (x) component of the quaternion
    • q2 = j (y) component of the quaternion
    • q3 = k (z) component of the quaternion
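So the order is (wxyz). For reference, here is a minimal sketch (illustrative, not part of the Clarius SDK) of converting these components into a rotation matrix with NumPy:

```python
import numpy as np

def quat_to_matrix(q0, q1, q2, q3):
    """Convert a (w, x, y, z) quaternion (q0 = w, q1 = x, q2 = y, q3 = z)
    into a 3x3 rotation matrix."""
    n = np.sqrt(q0*q0 + q1*q1 + q2*q2 + q3*q3)
    w, x, y, z = q0/n, q1/n, q2/n, q3/n   # normalize defensively
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

If you use SciPy instead, note that scipy.spatial.transform.Rotation.from_quat expects (x, y, z, w) order, so the components must be reordered as [q1, q2, q3, q0].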
  2. Why is the first quaternion not one that yields an identity rotation matrix, or at least close to identity? It seems like the first quaternion indicates that the probe was held at some angle.

Ensure the IMU sensor is calibrated: connect to the probe, then go to Menu > Settings > Motion Sensor.

  3. Relating to the above question, what is the co-ordinate system with which the orientation is defined, i.e. the quaternion reports the orientation of the probe relative to which fixed co-ordinate system?
  4. What is the physical location and orientation of the co-ordinate system attached to the sensor?

See https://github.com/clariusdev/motion#sensor-location. The diagram describes the coordinate system for the V1 probe models; it is similar for newer models. Note: the Y axis points away from the LED indicator. I don't have the exact sensor location for newer models; maybe @clariusk can provide it.

[Image: axes-v1, probe coordinate axes diagram]

  5. Is the co-ordinate system consistent across the accelerometer, gyroscope and magnetometer?

Yes, they are all part of the same IMU sensor.

Sensor specifications for the V1 and HD probes: https://www.st.com/resource/en/datasheet/lsm9ds1.pdf

  6. When reconstructing or displaying tracked frames, how do you determine the position of the frames from the raw readings? It is impossible to create tracked frames or volumes without the position info.

Image and IMU acquisition are asynchronous; there is no guarantee you will get an IMU measurement for each frame. Use the timestamps to correlate each image with an orientation, and interpolate orientations as necessary. All timestamps are expressed in the same time reference (the reference time point, however, is unspecified).
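As a sketch of that correlation step, assuming SciPy and illustrative array names (nothing here is a Clarius API), you can slerp between the IMU samples that bracket each frame timestamp:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Illustrative names only: imu_times/imu_quats_wxyz and frame_times stand in
# for whatever your acquisition pipeline produced (same time reference).
imu_times = np.array([0.000, 0.012, 0.025, 0.037])            # seconds
imu_quats_wxyz = np.array([[1.000, 0.00, 0.0, 0.0],
                           [0.999, 0.04, 0.0, 0.0],
                           [0.997, 0.08, 0.0, 0.0],
                           [0.993, 0.12, 0.0, 0.0]])

# SciPy expects (x, y, z, w) order, so reorder the (w, x, y, z) components.
rotations = Rotation.from_quat(imu_quats_wxyz[:, [1, 2, 3, 0]])
slerp = Slerp(imu_times, rotations)

frame_times = np.array([0.010, 0.030])     # image timestamps, same clock
frame_rotations = slerp(frame_times)       # interpolated orientation per frame
print(frame_rotations.as_quat())
```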

benjaminralex commented 2 years ago

Thanks for the answers. For question 6, this still doesn't answer how one gets the position information. I understand the orientation information, but reconstructing a volume requires position. How do you convert the accelerometer readings to positions internally? This is crucial information to have; without it, it is impossible to display 3D structures.

julien-l commented 2 years ago

How do you convert the accelerometer readings to positions internally?

We do not compute any position internally.

benjaminralex commented 2 years ago

But do you have any code to do so when doing a full 3D reconstruction?


julien-l commented 2 years ago

No, sorry, we have no code for this. If you are interested in localization, you could consider optical or electromagnetic trackers such as those manufactured by NDI.

clariusk commented 2 years ago

Note: IMU data should not be used on its own to try and generate 3D volumes; it can, however, be used as an input combined with other methods such as cameras or other tracking sources.

IMU information does, however, provide great and relatively accurate data on position within 3D space, for which we already provide published source code.

benjaminralex commented 2 years ago

I agree that IMU-only localization is pretty bad. But the Clarius ultrasound probe only has an accelerometer, and hence the only option is integration to get position? And does the source code that you are referencing do this integration?

clariusk commented 2 years ago

Yes, and sorry, I should have been more accurate in my previous comment: orientation in 3D space, as opposed to position, since drift becomes impossible to deal with after a short amount of time.
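To illustrate why (a sketch with assumed numbers, not Clarius code): even a tiny residual accelerometer bias, double-integrated, produces position error that grows quadratically with time:

```python
import numpy as np

# Illustration of IMU-only position drift: a stationary probe with a small
# residual accelerometer bias, double-integrated into position.
dt = 0.01                     # assumed 100 Hz sample rate
t = np.arange(0.0, 10.0, dt)  # 10 seconds of "measurements"
bias = 0.01                   # 0.01 m/s^2 residual bias after gravity removal

accel = np.full_like(t, bias)       # measured acceleration (bias only)
vel = np.cumsum(accel) * dt         # first integration -> velocity
pos = np.cumsum(vel) * dt           # second integration -> position

# Roughly 0.5 * bias * t^2 = 0.5 m of spurious displacement after 10 s.
print(f"spurious displacement after 10 s: {pos[-1]:.2f} m")
```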

The 9DOF data (accel, gyro, mag) can be used in raw format, or we have also converted it into a quaternion format (with any calibrations applied), which can be fed into orientation calculations. https://github.com/clariusdev/motion/blob/master/pyimu/pyimu.py shows an example of this, specifically the addTransform function.
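As a rough sketch of feeding such a quaternion into a transform (illustrative only, not the pyimu.py code itself), note that the translation part must stay zero because IMU data alone provides no position:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_from_quaternion(q0, q1, q2, q3):
    """Build a 4x4 homogeneous transform from a (w, x, y, z) quaternion.
    Orientation only: IMU data alone gives no position estimate, so the
    translation column is left at zero."""
    T = np.eye(4)
    # SciPy's from_quat expects (x, y, z, w), hence the reordering.
    T[:3, :3] = Rotation.from_quat([q1, q2, q3, q0]).as_matrix()
    return T
```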

benjaminralex commented 2 years ago

Hey Kris,

Thank you for the clarification. The orientation info is really good and readily usable, but from what you're saying, it seems like the probe doesn't have any current functionality to report position information?

It can only output raw IMU values, and the user is responsible for any form of fusion? Though with an IMU only, it is impossible to run any reasonably accurate position estimation.

Is my understanding correct?

Thanks!


clariusk commented 2 years ago

Yes, correct. There's lots of active research in this space to create volumes based on fused data, perhaps even IMU alone with enough tricks, or even pixel tracking.

Since we at Clarius are not experts in 3D, we are just able to provide the IMU data, which hopefully correlates temporally relatively well with the images produced.