robotology-legacy / yarp-wholebodyinterface

Implementation of the wholeBodyInterface for YARP robots.

yarpWholeBodySensors reading accelerometers/gyroscopes/orientations #42

Closed traversaro closed 4 years ago

traversaro commented 8 years ago

To use the wholeBodySensors interface in implementing the wholeBodyEstimator YARP module, we need to make sure that everything works properly for reading all the sensors we are interested in:

  • Implement the reading of accelerometers such as the ones published by the ETH robots (check the existing code to see how to add new "types" of accelerometers).
  • Implement the reading of gyroscopes. This type of sensor is not present at all at the moment, so it should first be added to the wholebodyinterface.
  • Implement the reading of "orientation" sensors (i.e. the output of a lower-level estimation of the orientation of a link). This type of sensor is not present at all at the moment either, so it should first be added to the wholebodyinterface. For this type of sensor it is also important to decide the serialization: the measurement itself is an element of SO(3) (i.e. the group of rotation matrices), but at the moment the wholeBodySensors interface supports only sensor measurements that are elements of R^n. We must either pick a serialization to R^n (such as some kind of rpy angles or unit quaternions, see the sketch below) or change the interface (that could make sense, but would perhaps involve more work).
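
As a concrete illustration of the serialization question in the last point, here is a minimal sketch (assuming Eigen, which is not necessarily what yarpWholeBodySensors would use) of mapping an element of SO(3) to R^4 via a unit quaternion:

#include <Eigen/Geometry>

// Serialize an element of SO(3) (a rotation matrix) to R^4 as a unit
// quaternion (w, x, y, z). Sketch only: the component ordering and the
// sign convention (q and -q encode the same rotation) would have to be
// fixed by the interface documentation.
Eigen::Vector4d serializeOrientation(const Eigen::Matrix3d& rotation)
{
    Eigen::Quaterniond q(rotation);
    q.normalize();                            // guard against numerical drift
    if (q.w() < 0.0) { q.coeffs() *= -1.0; }  // pick a canonical sign
    return Eigen::Vector4d(q.w(), q.x(), q.y(), q.z());
}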

naveenoid commented 8 years ago

I was going to push my new sensors tomorrow. Can we review after that?



traversaro commented 8 years ago

:+1:

iron76 commented 8 years ago

I think for now we can avoid considering the "orientation" sensors, since orientation will only be the result of an estimation process in wholeBodyEstimator.

traversaro commented 8 years ago

The "orientation" sensors are necessary for the seesaw-scenario estimation. I'll hope to update https://github.com/robotology/codyco-superbuild/wiki/Road-to-BERDY in the near future with what we discussed in last month meeting.

iron76 commented 8 years ago

The thing is, nowadays there exists no such thing as an orientation sensor. There are sensors that, via some processing (e.g. a Kalman filter), will give you an estimate of the attitude. Philosophically, an estimate is different from a sensor. I see that from the software point of view you prefer inserting the estimated orientation among the sensors, but I am just asking if this is the only possibility you see. Don't forget that soon we might provide our own estimation.

naveenoid commented 8 years ago

I agree that such "created" sensors normally should not be there. The only issue is a compatibility one: the IMU returns an orientation, and we must decide whether to treat it as a direct sensor source. Several humanoids use an IMU.

traversaro commented 8 years ago

I started writing this reply and was experiencing déjà vu... then I remembered that I had already written an issue on this topic: https://github.com/robotology/wholebodyinterface/issues/4 . : )

I agree (up to a certain extent) that there is no such thing as an "orientation" sensor. Strictly speaking, even the "accelerometer" is actually a force sensor that, via some processing (e.g. inverting the dynamics of a spring-mass system), gives you an estimate of the proper acceleration. And even the force sensor itself is actually a deformation sensor, and so on and so forth.

Jokes aside, we will need these "outputs of lower-level estimation processes" as inputs to our estimators.

Use cases that involve using the "output of lower-level estimation processes" include, for example, the seesaw-scenario estimation mentioned earlier.

Strictly speaking, even if we need them, these are not sensors, and so they should not be included in the "Sensors" interface. We could then create a separate "LowerLevelEstimationOutputsInterface" to deal just with this kind of output, but what benefit would come from this additional complexity? Perhaps slightly overloading the sense of the "SensorsInterface" is the best tradeoff. Trading a slight change in the semantics of "Sensor" for lower complexity is exactly the choice that the architects of the Android Sensor API [1] made:

The Android sensor framework lets you access many types of sensors. Some of these sensors are hardware-based and some are software-based. Hardware-based sensors are physical components built into a handset or tablet device. They derive their data by directly measuring specific environmental properties, such as acceleration, geomagnetic field strength, or angular change. Software-based sensors are not physical devices, although they mimic hardware-based sensors. Software-based sensors derive their data from one or more of the hardware-based sensors and are sometimes called virtual sensors or synthetic sensors. The linear acceleration sensor and the gravity sensor are examples of software-based sensors.

As you may have imagined, I am in favor of keeping the "orientation" sensor, but obviously any alternative solution is welcome.

[1] : http://developer.android.com/guide/topics/sensors/sensors_overview.html
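
To make the Android-style tradeoff concrete, the change could be as small as extending the sensor-type enumeration of the wholebodyinterface with the new "software" types. This is a sketch only; the enumerator names below are illustrative, not the actual ones in the wholeBodyInterface headers:

// Illustrative sketch of an extended sensor-type enumeration: hardware-based
// types coexist with "software" types whose values come from lower-level
// estimation processes, as in the Android Sensor API.
enum SensorType
{
    SENSOR_ENCODER,        // hardware-based (existing)
    SENSOR_FORCE_TORQUE,   // hardware-based (existing)
    SENSOR_IMU,            // hardware-based (existing)
    SENSOR_ACCELEROMETER,  // hardware-based (existing)
    SENSOR_GYROSCOPE,      // to be added: raw angular velocity
    SENSOR_ORIENTATION     // to be added: output of a lower-level attitude estimator
};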

traversaro commented 8 years ago

Note: this reply was originally left on https://github.com/robotology/codyco-modules/issues/146 , but it is more relevant here, so I will link to it from that discussion.

I would expect it to be possible to configure yarpWholeBodySensors by simply specifying the "name" of the sensor(s) that you want to read, using the addSensors method (see http://wiki.icub.org/codyco/dox/html/classyarpWbi_1_1yarpWholeBodySensors.html). The name can be based on the name of the boards (see http://wiki.icub.org/wiki/Distributed_Inertial_sensing), i.e. the accelerometer of board 1B8 on link l_forearm could be named l_forearm_1B8_acc (just a proposal; think about a good convention).

Once we define the naming for the sensors (which should be used consistently in the URDF), we then need to map each sensor name to how the sensor is published on the YARP network. I don't remember the details of how MTB accelerometers are published on the YARP port, but I think that having one code identifying the accelerometer plus the port name will be sufficient.

This information can be placed in the yarpWholeBodySensors configuration file, under the [WBI_YARP_ACCELEROMETERS] section, for example (similarly to what we already have for FT sensors and the IMU in [WBI_YARP_IMU_PORTS] and [WBI_YARP_FT_PORTS], see https://github.com/robotology/yarp-wholebodyinterface/blob/master/app/robots/iCubGenova01/yarpWholeBodyInterface.ini#L63 ):

[WBI_YARP_ACCELEROMETERS]
l_forearm_1B8_acc = ("mtb_eth","/icub/portname","code")

(this is just an example; the configuration format is not determined a priori by anything).

Given that accelerometers can be read from several sources (MTB ETH, MTB CAN, the usual "inertial" port), for the accelerometers it is necessary to specify the "type". In fact, I had already committed support for reading accelerometers from the usual iCub /inertial port, see https://github.com/robotology/yarp-wholebodyinterface/blob/master/src/yarpWholeBodySensors.cpp#L462 (not documented, sorry :( ). This code can be expanded to also support the MTB accelerometers.
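
Putting the pieces together, usage from the client side could look roughly as follows. This is a sketch under stated assumptions: the sensor name follows the proposed convention, the configuration section above exists, and the addSensor/readSensor signatures are only approximated from the documentation linked above:

// Sketch only: configuration and sensor naming follow the proposal above;
// they are not something yarpWholeBodySensors supports out of the box.
yarpWbi::yarpWholeBodySensors sensors("wholeBodySensors", yarpWbiOptions);
sensors.addSensor(wbi::SENSOR_ACCELEROMETER, wbi::ID("l_forearm_1B8_acc"));
sensors.init();

double acc[3];
double timestamp = 0.0;
// blocking = false: return the latest measurement received on the port.
sensors.readSensor(wbi::SENSOR_ACCELEROMETER, 0, acc, &timestamp, false);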

jeljaik commented 8 years ago

Marco Accame once sent me this so that I could understand which board number corresponds to each MTB sensor. You probably won't like this, but should we include a dependency on icub-firmware-shared in order to make this automatic? What do you think @traversaro?

eOas_inertial_position_t

contains a unique id for every possible inertial sensor positioned on iCub. So far we can host up to 63 different positions. The actual positions on iCub are documented on http://wiki.icub.org/wiki/Distributed_Inertial_sensing where one must look for the tags 10B12, 10B13 etc. The mapping on CAN for the ETH robot v3 is written aside

traversaro commented 8 years ago

Mhh.. I would avoid the dependency on icub-firmware-shared... also because we would anyway need to link those numerical identifiers to the URDF sensor names, so we would still need a string <---> int mapping somewhere, be it in a configuration file or in the code.
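
For example, that mapping could be as simple as a table in the configuration parser. A minimal sketch, where the numeric codes are placeholders (the real ones are listed on the Distributed_Inertial_sensing wiki page):

#include <map>
#include <string>

// Hypothetical string <---> int mapping from WBI/URDF sensor names to the
// numeric board identifiers used on the YARP port; codes are placeholders.
static const std::map<std::string, int> accelerometerBoardCodes = {
    {"l_forearm_1B8_acc", 8},
    {"r_forearm_2B8_acc", 24},
    {"l_foot_10B12_acc", 44}
};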

jeljaik commented 8 years ago

Alright, I was just saying because, when Marco Accame showed me that file, he also told me that it was still subject to modifications (just hoping that one day we won't find discrepancies).

traversaro commented 8 years ago

I think it would be useful to document how to read the MTB accelerometers/gyroscopes on ETH robots on this wiki page: http://wiki.icub.org/wiki/Distributed_Inertial_sensing . cc @naveenoid @jeljaik

jeljaik commented 8 years ago

Also remember this piece of code where I parse the acc data: https://github.com/robotology/codyco-modules/issues/146#issuecomment-159620350

traversaro commented 4 years ago

As yarp-wholebodyinterface has been deprecated, we will not solve this issue. However, the spiritual successors of this interface are the MultipleAnalogSensors YARP interfaces, extensively discussed in https://github.com/robotology/yarp/issues/1526 .
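
For future readers, a minimal sketch of reading an orientation measurement through the MultipleAnalogSensors client device in current YARP (port names are illustrative):

#include <yarp/os/Network.h>
#include <yarp/os/Property.h>
#include <yarp/dev/PolyDriver.h>
#include <yarp/dev/MultipleAnalogSensorsInterfaces.h>
#include <yarp/sig/Vector.h>

int main()
{
    yarp::os::Network yarp;

    yarp::os::Property options;
    options.put("device", "multipleanalogsensorsclient");
    options.put("remote", "/icub/head/inertials");   // illustrative server port
    options.put("local", "/reader/head/inertials");

    yarp::dev::PolyDriver driver(options);
    yarp::dev::IOrientationSensors* orient = nullptr;
    if (!driver.isValid() || !driver.view(orient)) {
        return 1;
    }

    yarp::sig::Vector rpy(3);
    double stamp = 0.0;
    // Read the first orientation sensor exposed by the server, as roll-pitch-yaw.
    orient->getOrientationSensorMeasureAsRollPitchYaw(0, rpy, stamp);
    return 0;
}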