There's a reason given — 8-bit controllers. But I think 8-bit MCUs can be considered dead, except for Arduino and simple peripheral controllers, usually running bare metal. So at least `val` can be changed to `int32_t` without regret.
Still, floating point should not be used.
There's a reason given — 8-bit controllers.
I understand that it is costlier to manipulate 32-bit values on an 8-bit MCU (though it can still be done). But even if one has an 8-bit MCU, it is not up to us to force the use of lower resolution. If some user really cannot afford to use 32 bits he can always resort to patching the code (after all, if he has such special requirements, he can afford some extra effort).
There is another option (maybe it's a bad idea, but I'll spit it out anyway) and that is to left-align values (like what the I2S protocol does). This means that if a sensor has a 16-bit range, the value is left-shifted 16 bits more to make the 32-bit SAUL value. If it has 24 bits, it is shifted by eight. By doing this, the resulting value can be reinterpreted as a fixed point number between -1 and 1, which should simplify handling.
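For illustration, a minimal sketch of that left-alignment (mine, not from the original comment):

#include <stdint.h>

/* Left-align a 24-bit ADC sample into a 32-bit SAUL value. The result can be
 * reinterpreted as a Q31 fixed point number in [-1, 1). */
int32_t saul_val_from_24bit(int32_t raw24)
{
    return (int32_t)((uint32_t)raw24 << 8);
}

/* A 16-bit sample is shifted by 16 instead: */
int32_t saul_val_from_16bit(int16_t raw16)
{
    return (int32_t)((uint32_t)(uint16_t)raw16 << 16);
}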
Still, floating point should not be used.

Agreed. I think 32-bit integers cover all (reasonable) cases.
I'd like to see an interface with two groups of functions: one to get data and one to get metadata.
I will do three things here: explain the design concept behind `phydat` (as you need to understand it in order to make judgements), address the issues raised above, and list some options for how we could improve things.

`phydat` was designed as an interchangeable, self-contained data structure for holding concrete (physical) data, representing some sort of state in the 'real world' - with a focus on memory efficiency and ease of use. Clearly we had to make a trade-off between precision and memory footprint, and using `int16_t` seemed to be a good choice, as almost no sensor that we know (or better, knew back then) is able to produce meaningful values that do not fit in that range. In most cases, the precision in the least significant bit(s) can furthermore be pretty safely cut out, as they show more jitter and noise than actually usable data (e.g. look at the LSB of any accelerometer we support...).
Don't get me wrong, I am not saying that we can/want to always ignore parts of the measured data, or in other words always want to be imprecise - but for many, many use cases, especially in the IoT, this is a fully valid behavior. So `phydat` could maybe be regarded as a kind of lossy compression format for physical data.
One very important point I'd like to underline once more: `phydat` was not designed for holding RAW data of any kind, but for declaring self-contained 'chunks' of converted, physical data. And this format is not meant to be used as the default base for every (sensor/actuator) data related thing we do in RIOT, but as a minimal viable interchange format, that is easy to use and has low memory overhead, but is still valid for a large part (>90%) of use cases. When more precision is needed, one can always fall back to using the concrete sensor interface directly, handling the data manually in whatever data type is needed.
Also noteworthy: `phydat` (in conjunction with SAUL) is designed to enable application developers to write code independent of the concrete sensor/actuator devices used. But similar to the periph interfaces, this approach will never be able to serve all special functions that certain devices might have - nor some particularly precise devices. So here, using the sensor interface directly probably makes more sense...
So let me try to address the issues raised above in some more detail:
16 bit is too little for some applications
Please provide one or two concrete examples of high-level, functional use cases (in the context of physical data) where a higher precision is actually mandated by the application. It would be good if we could collect a few examples to help us verify any improvements/changes that we make...
ADCs meant for scales, for example, usually have 24 bits. Other applications also demand more than 16 bits. See #10151.
Yes, but ADCs give pure RAW values, which do not have any meaning without conversion/mapping (and as stated above, this is not what `phydat` was designed for). So thinking of scales (be it typical kitchen scales, bathroom scales, or even huge scales to weigh grain trucks, etc.), it seems to me that for most use cases the precision of `int16_t` should suffice. For example, we can map 0-320kg with a precision down to 10g into that data type -> I guess that beats the typical bathroom scale already...
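Spelling that mapping out in phydat terms (my sketch; I believe RIOT's gram unit constant is UNIT_GR, but take the name as an assumption):

/* 0-320 kg at 10 g resolution: store the mass in steps of 10 g.
 * 320.00 kg = 32000 * 10 g, and 32000 still fits into int16_t. */
phydat_t m = {
    .val   = { 32000, 0, 0 },
    .unit  = UNIT_GR,   /* grams (assumed constant name) */
    .scale = 1,         /* value = val * 10^1 g, i.e. 10 g steps */
};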
16 bits is equivalent to 4.8 decimal digits; take 1 bit for the sign and you are left with 4.5.
Yes, but see comments above.
The format is some weird home-grown decimal floating point.
Incorrect: it is pretty standard, base-10 fixed point notation. It has nothing to do with floating point, and it is not home-grown. And I hope we all agree that using `float` in `phydat` is not an option (-> extremely inefficient on MCUs without hard-float units...).
This makes it super hard and inefficient to manipulate.
That is your subjective opinion. We can agree that base-10 calculations are slightly more inefficient than base-2 (mul and div vs. shift), but on modern MCUs this should be on the order of a single-digit number of CPU cycles for typical data conversions. But from a semantic perspective, I have seen that developers can far more easily understand (and therefore use) base-10 fixed point numbers compared to base-2 ones. This way, RAW data conversions as typically given in datasheets can be implemented directly...
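For instance (my example, not from the original comment): a temperature sensor whose datasheet gives T = raw * 0.0625 °C maps directly onto a base-10 scale:

/* T = raw * 0.0625 °C  ==  raw * 625 * 10^-4 °C
 * e.g. raw = 368  ->  val = 230000, scale = -4  ->  23.0000 °C
 * (phydat_fit() would then scale this down to fit into int16_t) */
int32_t val   = (int32_t)raw * 625;
int8_t  scale = -4;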
See this piece of code to get an idea of what I mean.
Please take care not to mix up concepts: the code you refer to is converting floating point into fixed point numbers.
One has to use divisions and multiplications to convert (that is why regular floating point is base-2: one can use bit shifts).
Yes, but see comment above: base-2 fixed point arithmetic might be saving a CPU-cycle or two, but is much harder to use for most developers...
In most cases, people will just truncate digits off their measurement to make them fit in 16 bits (the code I showed before does that). This is wrong. The correct thing would be convergent rounding.
The world is not black and white, so using 'This is wrong' in this context is just, uhh, 'wrong'. Again, it comes down to the trade-off between precision and (runtime, memory) efficiency. Convergent rounding does not come for free! So it depends very much on the context/use case what the best solution is...
The unit is transmitted with every value. What sensor in this world changes its unit from one measurement to the next?
Please try to understand the concept of `phydat`: it is meant to be self-contained (thus the unit field). This allows us e.g. to write data processing chains out of generic modules that use this field for validating their input data. Quoting `phydat.h`:
* The idea is to enable different sensor/actuator drivers and other RIOT
* modules to exchange and have the same view on this kind of data. Labeling
* data with a unit type and its scaling makes it possible to pipe data between
* modules in an automated fashion without the need of specialized software
* wrappers and/or data normalization modules.
PHYDAT_DIM hard coded to 3. Impossible to transmit more than 3 values.
Again, it's a trade-off between versatility and efficiency. One major point is to have a structure of fixed size, as this vastly improves efficiency.
Does anyone know any sensor/actuator device that produces more than 3 dimensions of data? Again, it would be good to have concrete examples...
Lots of waste when transmitting one value.
Not lots, but exactly 32 bits of waste when using 1-dimensional data. This was one more reason to use 16-bit ints, as this limits the 'waste' to an acceptable amount while having all the benefits of a fixed-size data type.
So here are some options I see that we could pursue to improve `phydat` and address at least some of the raised issues:

1. Use `int32_t` as the type for the `val` field. Pro: we double the precision, a lot of scaling would not be needed anymore, which would potentially simplify data handling and conversion. Cons: `sizeof(phydat_t)` would grow from 64 to 112 bits, 'wasting' 64 instead of 32 bits for 1-dimensional data. Also it does not align so nicely anymore.
2. Allow `phydat` to use different types internally by making `val` a `union` (see https://github.com/haukepetersen/RIOT/tree/opt_phydat_makemoreversatile for a very early try I did a while ago, and the sketch after this list). This way, one could for example use a `uint32_t val` instead of `int16_t val[3]` for RAW values. We could e.g. use 2 or 3 bits from the unit field to annotate the data type that is used.
3. Introduce an additional data structure next to `phydat` (and possibly SAUL) that is designed towards precision in the first place.

Any more suggestions?
So sorry for the long reply, but I hope I could make some design choices clearer. So let's see what we can do to make RIOT a better home for sensors and actuators :-)
IMUs, energy meters and some other sensors do not fit into the phydat format, as it doesn't allow different units to be measured simultaneously.
Simultaneous sampling ADCs and color sensors may have a lot of channels ("dimensions") to be measured at once, capacitance sensors have 4, 6 or 8 channels, multichannel DAC applications often need all the values to be loaded simultaneously... Even with regular ADCs with multiplexed channels it is often necessary to sample all channels as fast as possible.
Data from various sensors (with different units and/or scales) may need to be combined, e.g. temperature readings associated with sensor data.
Actually, phydat is suitable only for simple sensors and simple measurements. I always regarded it as a kind of toy, an Arduino-like feature, not actually useful in real-world applications.
@haukepetersen
Thanks for the clarifications.
almost no sensor that we know (or better, knew back then) is able to produce meaningful values that do not fit in that range.
As is usually the case, the time has come where past assumptions are no longer valid.
There are other use cases for high-resolution ADCs, especially when measuring weak and slow signals like bioelectrical signals. For example, a high-resolution converter can simplify the input circuitry by avoiding the need for DC-decoupling the few mV introduced by electrode contacts.
In most cases, the precision in the least significant bit(s) can further be pretty safely cut out, as they show more jitter and noise as actually usable data
The least significant bits that are "dancing around" actually carry valuable information. See this article and this application note. In fact, it is possible to measure signals well below the noise level of the converter.
phydat was not designed for holding RAW data of any kind, but for declaring self-contained 'chunks' of converted, physical data
I'm not saying phydat should die, but that it should not be the primary interface to the sensor.
Yes, but ADCs give pure RAW values, that do not have any meaning without conversion/mapping (and as stated above this is not what phydat was designed for)
I understand, but that means that phydat (and thus SAUL) is not to be used except at the very last end of the data pipeline. Once SAUL is out of the picture, the only thing left is a jungle of different APIs for different sensors.
Please take care not to mix up concepts: the code you refer to is converting floating point into fixed point numbers.
The code I showed converted to/from floating point from/to phydat_t. If I were using integers instead of floats, the code would be even more complex. Of course, if someone has a better/simpler algorithm, I'd love to see it.
Convergent rounding does not come for free!
That's why I propose 32 bits, so that there is no rounding done and the user is free to manipulate the data how he pleases. The problem with the current API is that there's no way to opt out of the automagical transformations without having to resort to device-specific APIs.
Please try to understand the concept of phydat: it is meant to be self-contained (thus the unit field). This allows us e.g. to write data processing chains out of generic modules
We already saw this is harder to do with phydat. Let's imagine one has a scale. One starts with voltage measurements from a 24-bit converter and must work one's way to grams. One cannot write this chain entirely in the phydat domain, because right from the start the ADC gets reduced to a 16-bit one.
Also, when processing data one needs a few additional bits of headroom beyond the intended resolution.
we could switch to base-2 fixed point numbers, but this would IMHO be harder to use for developers while only yielding a minimal performance gain.
Why? To make an analogy: is binary harder to use than binary-coded decimal? If you are programming a 7-segment display, yes; for other uses, no.
I think that whatever is not supported by the language is harder to use.
In the exposition that follows, wherever "sensor" appears, it can be replaced by "actuator".
As I said before, there is no need to kill phydat. In fact, as a matter of backwards compatibility, I'd want it to be kept as is.
I don't see the need to have an all-in-one data structure.
What are the characteristics of physical data? It has a dimension (the physical unit), a shape (how many values make up one sample, e.g. 3 axes), and a scale.

Note that of these three items, dimension and shape will certainly never vary. The scale varies according to how the measuring element is set. Additionally, real-world measured data has a precision, and digitized data has a word size. To this we can add the range of the sensor.

According to this analysis, it now seems that the amount of metadata contained in phydat is too little!
We can regard the sensor (or actuator) as an input (or output) device, like a network or serial device may be. Then there is no reason to limit data transfers to one item at a time.
As a rule, no sensor may change its scale, shape, dimension or range by itself. The same goes for word size, though in this case I'd argue that it should never change.
This means that the data can be transferred almost raw, and the user can always request the metadata if he so wishes.
To express dimension, a 7-tuple of base units can be specified, each element representing the exponent of an SI base unit (m, kg, s, A, K, mol, cd) - mol is probably not needed. For example, g (for an accelerometer) would have dimension (1, 0, -2, 0, 0, 0, 0) and a light sensor would be (-2, 0, 0, 0, 0, 0, 1), that is, lux.
A single read function that takes a buffer and fills it with samples (of known word size) should be able to accommodate any sensor.
For example (please keep in mind this is just a quick sketch to illustrate the point), something like
int io_read(int32_t *buf, size_t *count);
If the device is a 3-axis accelerometer, then buf will be filled with data triplets. If it is a scale, with scalar values. In any case, the metadata can be queried. Sensors with a smaller word size can left-align the data samples.
If this is considered too wasteful, the API could instead have several functions for 8, 16 and 32 bits. If a read function with fewer bits is used on a sensor with more bits, the result is truncated, and if a function with more bits is used on a smaller sensor, the result is left-aligned (a.k.a. right-padded). The nice thing is that for the user it just works, and he can always query the preferred word length for the sensor and use the right function. A sketch of this follows.
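A minimal sketch of such width adaptation (the function names and the 16-bit native word size are my assumptions):

#include <stdint.h>
#include <stddef.h>

/* Hypothetical: the driver's native read yields 16-bit samples. */
size_t dev_read_raw16(int16_t *buf, size_t count);

/* 32-bit view of a 16-bit sensor: left-align (right-pad) each sample. */
size_t io_read32(int32_t *buf, size_t count)
{
    int16_t tmp[count];
    size_t n = dev_read_raw16(tmp, count);
    for (size_t i = 0; i < n; i++) {
        buf[i] = (int32_t)((uint32_t)tmp[i] << 16);
    }
    return n;
}

/* 8-bit view of the same sensor: truncate to the most significant byte. */
size_t io_read8(int8_t *buf, size_t count)
{
    int16_t tmp[count];
    size_t n = dev_read_raw16(tmp, count);
    for (size_t i = 0; i < n; i++) {
        buf[i] = (int8_t)(tmp[i] >> 8);
    }
    return n;
}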
Because all metadata is available, more user friendly layers (including conversion to phydat) can always be added on top.
I just saw #10156. It wants support for a 6 channel device.
PS: sorry for my bible-sized comment.
@jcarrano there are a lot of cases with non-uniform data structures — with varying dimensions.
1) simultaneous measurements — e.g. if you have a 6-axis gyro-accelerometer, a precise inertial navigation system needs rotation and acceleration data tightly synchronized, which means obtained with a single request to the device
2) further data processing — somewhere on a data processing pipeline you may need to combine data from various sensors in a single structure
Still, I can't imagine how any fixed structure can accommodate all of this, so I think maybe we should leave SAUL/phydat as it is, but clearly state in the documentation that its usefulness is limited, and provide examples of when and where SAUL should not be used.
As for now, I can't find any reasonable explanation what SAUL is and how it can be used at all.
@olegart
I know case (1) quite well. In fact, it is partly what prompted me to question phydat. In my proposal, non-uniform data can be described if metadata is not global to all channels, but instead each channel has its own metadata.
Still, I can't imagine how any fixed structure can accommodate all of this.
It can't. Therefore I suggest that reading from a sensor just fills a buffer with data. The only requirement is that all elements of this buffer are equally sized.
@jcarrano
Metadata array and data array then. Metadata is something like
typedef struct {
uint8_t index; /* index in data array */
uint8_t size; /* size of data chunk, bytes */
int8_t scale; /* scale, base 10 */
uint8_t units; /* dimension */
} phymeta_t;
phymeta_t phymeta[MAX_ELEMENTS];
And functions to read the data and to query this metadata.
Wouldn't be cheap but can be used to construct and process real data.
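Those functions could look something like this (a sketch with names I made up, building on the phymeta_t above):

/* Fill `buf` with up to `max` raw data elements; returns the number read. */
int phy_read(device_t *dev, void *buf, size_t max);

/* Return a pointer to the (constant) metadata array and its length. */
const phymeta_t *phy_get_meta(device_t *dev, size_t *n_elements);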
I would consider using a 32-bit float for the scale. Neither soft nor hard FP is needed for this. It can be defined in such a way that it can be interpreted as a float, so float support comes for free (an important feature now that more MCUs are including hard FP). For the usage that we want to give it (a multiplicative constant), handling floats in software is no more difficult than decimal (and probably more efficient), provided that we set a rule that the constant cannot be INF, NAN or subnormal.
Going back to the example of the IMU, raw data (that does not correspond to any physical unit) is preferable. One is doing some sort of sensor fusion algorithm to obtain orientation. This is essentially a filter with a set of constants. The constants that convert raw data into physical values can be combined with the filter constants. Not only does this preserve precision, it is also more efficient. In this case, one can have the "ideal" filter coefficients stored in physical units (which often do not result in the nicest values, i.e. very big or very small) with whatever precision is needed. These get combined with the scale of the input variables (which again may be "ugly") to form the "real" coefficients. These "real" coefficients should (as a matter of numerical stability, if you designed the filter well) be "nice" numbers.
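To make the folding concrete (a sketch with made-up names):

#define N_TAPS 8
float c_ideal[N_TAPS]; /* "ideal" coefficients, designed in physical units */
float c_real[N_TAPS];  /* coefficients with the sensor scale folded in */
float k;               /* raw -> physical conversion constant */

void fold_scale(void)
{
    /* done once at init time, not per sample */
    for (int i = 0; i < N_TAPS; i++) {
        c_real[i] = c_ideal[i] * k;
    }
}
/* Per sample: y = sum(c_real[i] * raw[i]) - no separate conversion step,
 * so the raw data loses no precision to intermediate rounding. */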
This piece of code, taken from the Berkeley SoftFloat shows how easy it is to pack and unpack floats:
#define signF32UI( a ) ((bool) ((uint32_t) (a)>>31))
#define expF32UI( a ) ((int_fast16_t) ((a)>>23) & 0xFF)
#define fracF32UI( a ) ((a) & 0x007FFFFF)
#define packToF32UI( sign, exp, sig ) (((uint32_t) (sign)<<31) + ((uint32_t) (exp)<<23) + (sig))
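For instance (my example, not from SoftFloat), the bit pattern of 0.5f can be assembled without any FP instructions:

/* 0.5f = (-1)^0 * 2^(126-127) * 1.0 -> sign 0, biased exponent 126, fraction 0 */
uint32_t half_bits = packToF32UI(0, 126, 0);   /* == 0x3F000000 */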
What if channels are described by an array of pointers to struct instead of an array of struct?
WARNING Super crude PoC follows
enum BASE_UNITS {
UNIT_M, UNIT_KG, UNIT_S, UNIT_A, UNIT_K, UNIT_MOL, UNIT_CD, /* the 7 SI base units */
N_BASE_UNITS
};
/* The conversion is either y = scale*x + offset or y = scale*(x+offset) */
typedef struct {
float32_t scale;
float32_t offset; /* does this need to be float32_t ? */
} lin_scale_t;
typedef struct {
int8_t dimensions[N_BASE_UNITS]; /* exponents of the SI base units, cf. the 7-tuple above */
uint8_t class; /* because the unit does not tell us what kind of sensor it is */
int32_t range[2]; /* upper and lower range */
lin_scale_t scale; /* should this be embedded here or be a pointer? */
} channel_info_t;
/* Return the number of channels.
* The idea is to call this with *n_chans = 0 if you only want to know how many
* channels there are.
*/
int get_channel_info(device_t *dev, channel_info_t **ch_info, size_t *n_chans);
/*< Put function to query sample rate here > */
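Presumably the two-step call would look something like this (my reading of the PoC; MAX_CHANNELS is an assumed application-side limit):

size_t n_chans = 0;
int n = get_channel_info(dev, NULL, &n_chans);  /* *n_chans == 0: only query the count */

channel_info_t *infos[MAX_CHANNELS];
n_chans = (size_t)n;
get_channel_info(dev, infos, &n_chans);         /* now fetch the channel descriptors */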
I forgot to mention: for sensors which sample data regularly, it should be possible to query the sample rate. This is super important for filtering, for setting buffer sizes if one wants to process in blocks, etc.
Regarding the rounding issue: when using phydat_fit to convert raw data into a `phydat_t` data structure, rounding away from zero is performed. And the scaling there is implemented quite efficiently, so the computational overhead of using a base-10 scale is not too big. So the issue with the rounding has been addressed already, and quite efficiently even.
Regarding the efficiency: there are even use cases where base 10 is more efficient. E.g. when "pretty-printing" the results in a human-readable way, it has to be base 10 anyway. An example of pretty-printing is the SAUL shell command, or providing sensor data in a CoAP server as plain text. I have also worked with a few machines that provide raw sensor data in base 10 (e.g. in 0.01% or in c°C ("centi-degrees Celsius")).
My point is: a base-2 scale is not automatically more efficient; it depends on the use case.
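As a small illustration of the base-10 printing path (my sketch, not RIOT code; negative values and scale >= 0 are left out for brevity):

#include <stdio.h>
#include <stdint.h>

/* Print a (val, scale) pair, e.g. val = 2342, scale = -2  ->  "23.42".
 * Assumes val >= 0 and -4 <= scale < 0. */
void print_scaled(int16_t val, int8_t scale)
{
    int32_t div = 1;
    for (int8_t s = scale; s < 0; s++) {
        div *= 10;
    }
    printf("%ld.%0*ld\n", (long)(val / div), -scale, (long)(val % div));
}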
Maybe this TODO is a good compromise:
It might make sense to introduce additional data types for increased precision, i.e. something like phydat_float_t...
I'm not sure if a "one-size-fits-all" `phydat_t` is the best solution, with a precision that is "too high" for some use cases and too low for others. (But surely, ending up with a fragmented API and 100 different `phydat_t` flavors would also be sub-optimal.)
BTW: I also would like to see a `phydat_accuracy_t`, which could give information about accuracy and precision (using this definition of accuracy and precision). Adding this to each sensor in the SAUL registry and providing worst-case estimations would be quite nice :-). (This could be a constant value, so no RAM or runtime overhead.)
I have a lot of temperature sensors that have a precision (according to the data sheet) of somewhere between 0.1°C and 0.5°C, but still their value is off by ±5°C (unless they were recently calibrated). So just giving the precision would easily be misleading.
And the scaling there is implemented quite efficiently
Not so much considering the alternative (proposed in this thread) would not need any fitting. (See here)
bin/samr21-xpro/phydat/phydat.o: file format elf32-littlearm
Disassembly of section .text.phydat_fit:
00000000 <phydat_fit>:
};
#define LOOKUP_LEN (sizeof(lookup_table_positive) / sizeof(int32_t))
void phydat_fit(phydat_t *dat, const int32_t *values, unsigned int dim)
{
0: b5f7 push {r0, r1, r2, r4, r5, r6, r7, lr}
assert(dim <= (sizeof(dat->val) / sizeof(dat->val[0])));
2: 2a03 cmp r2, #3
4: d823 bhi.n 4e <phydat_fit+0x4e>
6: 000e movs r6, r1
8: 000d movs r5, r1
uint32_t divisor = 0;
uint32_t max = 0;
a: 2300 movs r3, #0
c: 0097 lsls r7, r2, #2
e: 187f adds r7, r7, r1
* be used for both positive and negative values. As result, -32768 will be
* considered as out of range and scaled down. So statistically in 0.00153%
* of the cases an unneeded scaling is performed, when
* PHYDAT_FIT_TRADE_PRECISION_FOR_ROM is true.
*/
for (unsigned int i = 0; i < dim; i++) {
10: 42bd cmp r5, r7
12: d120 bne.n 56 <phydat_fit+0x56>
#endif
}
}
for (unsigned int i = 0; i < LOOKUP_LEN; i++) {
if (max > lookup[i]) {
14: 4c29 ldr r4, [pc, #164] ; (bc <phydat_fit+0xbc>)
16: 42a3 cmp r3, r4
18: d827 bhi.n 6a <phydat_fit+0x6a>
1a: 4c29 ldr r4, [pc, #164] ; (c0 <phydat_fit+0xc0>)
1c: 42a3 cmp r3, r4
1e: d826 bhi.n 6e <phydat_fit+0x6e>
20: 4c28 ldr r4, [pc, #160] ; (c4 <phydat_fit+0xc4>)
22: 42a3 cmp r3, r4
24: d825 bhi.n 72 <phydat_fit+0x72>
26: 4c28 ldr r4, [pc, #160] ; (c8 <phydat_fit+0xc8>)
28: 42a3 cmp r3, r4
2a: d824 bhi.n 76 <phydat_fit+0x76>
2c: 4c27 ldr r4, [pc, #156] ; (cc <phydat_fit+0xcc>)
2e: 42a3 cmp r3, r4
30: d909 bls.n 46 <phydat_fit+0x46>
for (unsigned int i = 0; i < LOOKUP_LEN; i++) {
32: 2304 movs r3, #4
divisor = divisors[i];
34: 4c26 ldr r4, [pc, #152] ; (d0 <phydat_fit+0xd0>)
36: 009d lsls r5, r3, #2
38: 592d ldr r5, [r5, r4]
dat->scale += 5 - i;
3a: 79c4 ldrb r4, [r0, #7]
3c: 3405 adds r4, #5
3e: 1ae3 subs r3, r4, r3
40: 71c3 strb r3, [r0, #7]
break;
}
}
if (!divisor) {
42: 2d00 cmp r5, #0
44: d11f bne.n 86 <phydat_fit+0x86>
for (unsigned int i = 0; i < LOOKUP_LEN; i++) {
46: 2300 movs r3, #0
/* No rescaling required */
for (unsigned int i = 0; i < dim; i++) {
48: 4293 cmp r3, r2
4a: d116 bne.n 7a <phydat_fit+0x7a>
* integer division as last resort here.
*/
dat->val[i] = -((uint32_t)((-values[i]) + divisor_half) / divisor);
}
}
}
4c: bdf7 pop {r0, r1, r2, r4, r5, r6, r7, pc}
assert(dim <= (sizeof(dat->val) / sizeof(dat->val[0])));
4e: 4921 ldr r1, [pc, #132] ; (d4 <phydat_fit+0xd4>)
50: 2003 movs r0, #3
52: f7ff fffe bl 0 <core_panic>
if (values[i] > (int32_t)max) {
56: 682c ldr r4, [r5, #0]
58: 429c cmp r4, r3
5a: dd01 ble.n 60 <phydat_fit+0x60>
max = -values[i];
5c: 0023 movs r3, r4
5e: e002 b.n 66 <phydat_fit+0x66>
else if (-values[i] > (int32_t)max) {
60: 4264 negs r4, r4
62: 42a3 cmp r3, r4
64: dbfa blt.n 5c <phydat_fit+0x5c>
66: 3504 adds r5, #4
68: e7d2 b.n 10 <phydat_fit+0x10>
for (unsigned int i = 0; i < LOOKUP_LEN; i++) {
6a: 2300 movs r3, #0
6c: e7e2 b.n 34 <phydat_fit+0x34>
6e: 2301 movs r3, #1
70: e7e0 b.n 34 <phydat_fit+0x34>
72: 2302 movs r3, #2
74: e7de b.n 34 <phydat_fit+0x34>
76: 2303 movs r3, #3
78: e7dc b.n 34 <phydat_fit+0x34>
dat->val[i] = values[i];
7a: 009d lsls r5, r3, #2
7c: 594d ldr r5, [r1, r5]
7e: 005c lsls r4, r3, #1
80: 5305 strh r5, [r0, r4]
for (unsigned int i = 0; i < dim; i++) {
82: 3301 adds r3, #1
84: e7e0 b.n 48 <phydat_fit+0x48>
uint32_t divisor_half = divisor >> 1;
86: 0004 movs r4, r0
88: 0052 lsls r2, r2, #1
8a: 1883 adds r3, r0, r2
8c: 086f lsrs r7, r5, #1
for (unsigned int i = 0; i < dim; i++) {
8e: 9301 str r3, [sp, #4]
90: 9b01 ldr r3, [sp, #4]
92: 429c cmp r4, r3
94: d0da beq.n 4c <phydat_fit+0x4c>
if (values[i] >= 0) {
96: 6830 ldr r0, [r6, #0]
98: 2800 cmp r0, #0
9a: db08 blt.n ae <phydat_fit+0xae>
dat->val[i] = (uint32_t)(values[i] + divisor_half) / divisor;
9c: 1838 adds r0, r7, r0
9e: 0029 movs r1, r5
a0: f7ff fffe bl 0 <__aeabi_uidiv>
dat->val[i] = -((uint32_t)((-values[i]) + divisor_half) / divisor);
a4: b200 sxth r0, r0
a6: 8020 strh r0, [r4, #0]
a8: 3604 adds r6, #4
aa: 3402 adds r4, #2
ac: e7f0 b.n 90 <phydat_fit+0x90>
ae: 1a38 subs r0, r7, r0
b0: 0029 movs r1, r5
b2: f7ff fffe bl 0 <__aeabi_uidiv>
b6: 4240 negs r0, r0
b8: e7f4 b.n a4 <phydat_fit+0xa4>
ba: 46c0 nop ; (mov r8, r8)
bc: 1387ec77 .word 0x1387ec77
c0: 01f3fe0b .word 0x01f3fe0b
c4: 0031ffcd .word 0x0031ffcd
c8: 0004fffa .word 0x0004fffa
cc: 00007fff .word 0x00007fff
E.g. when "pretty-printing" the results in a human readable way, it has to be to base 10 anyway
When pretty-printing you already have enough overhead (from printf and friends) that the conversion will probably be insignificant. There's a reason we are no longer using binary-coded decimal.
The point I made in the previous comment is that if the phydat format is better for some applications, it does not need to go away, but it should not be the standard interface for sensor/actuator data. It can be easily implemented on top of the raw format (while the opposite is not true).
I also would like to see a phydat_accuracy_t
This is valuable data to have. I'd not tie it to phydat, for the reasons already described. Also, I think that a get/set API similar to what GNRC has may help in keeping the API lean. With that, maybe the `get_channel_info()` I proposed before could go away and be replaced by a "get" with the appropriate parameters.
Having noise estimates given by the sensor opens up the interesting possibility of having generic filters.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If you want me to ignore this issue, please mark it with the "State: don't stale" label. Thank you for your contributions.
Description
The data format used by SAUL is phydat_t:
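(For reference, the struct as defined in RIOT's phydat.h is, roughly:)

typedef struct {
    int16_t val[PHYDAT_DIM];  /* the 3 generic dimensions of data */
    uint8_t unit;             /* the (physical) unit of the data */
    int8_t  scale;            /* the scale factor, 10^scale */
} phydat_t;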
This format has several issues:

- 16 bit is too little for some applications.
- The format is some weird home-grown decimal floating point. This makes it super hard and inefficient to manipulate: one has to use divisions and multiplications to convert (that is why regular floating point is base-2: one can use bit shifts). In most cases, people will just truncate digits off their measurement to make them fit in 16 bits, which is wrong; the correct thing would be convergent rounding.
- The unit is transmitted with every value. What sensor in this world changes its unit from one measurement to the next?
- PHYDAT_DIM is hard-coded to 3. Impossible to transmit more than 3 values; lots of waste when transmitting one value.
The main problem here is that if one wants to realize the advantages of a unified interface offered by SAUL, there is no option but to use this format.
Also, while the format does not force it, one is expected to convert the values to a physical unit. This is often done with a fixed constant that depends only on the device configuration and is only relevant for display purposes. The machine does not care if the value is in inches or cm, as long as the correct conversion factors are available.