Closed jmxp69 closed 2 years ago
To expand on this, would it be possible for it to indicate which telemetry sensor it's using so the user can make an informed decision about the warning/critical values?
I realize this page is probably being rewritten, but as an example:
Or maybe even select which sensor in case they want to use RSSI/1RSS rather than LQ for the built-in alarm? 😬
I second that. The way the RSSI alarm is handled is a FrSky artefact misused for other protocols. Some protocols provide RSSI, some a strangely 0-100% scaled RSSI (my physics professor would have an allergic reaction), whilst some protocols provide a much more reasonable link quality indicator of some sort. Being able to select the sensor, with adjustable low/critical thresholds, would be much better.
The second best solution would be to allow deactivation of the built in telemetry alarm. This would allow users to define their own mechanisms like logical switches plus special function or write a LUA script.
Selecting the sensor could prove a bit tricky, and I would like to avoid the mess with sensor "re-discovery" and the like. The way it is right now, each protocol provides a measurement, which is automatically used (which is good).
At best (from the implementor's POV) we should do the following:
ELRS telemetry alarms are based on RQLY, which is expressed as a percentage (the ratio of successfully received packets). If the protocol provides RSSI, that should be based on dBm. So the label should follow the protocol-submitted measurement, right?
I do like Rob's idea that the signal strength alarm could also indicate the sensor submitted by the protocol rather than hardcoding "low alarm". That helps to remove doubt as to what the operator is seeing. I would bet a dollar 99% of the EdgeTX user base wouldn't know RQLY is the value submitted with ELRS. For myself, I always just assumed that was RSSI.
Some of the mess is also caused by the fact that it is not set up per module, but per model. So in theory you could have 2 different values if you have 2 modules ON in that model.
In practice, telemetry only really works when either only one module sends it, or both modules use the same protocol. Behavior with 2 modules sending telemetry is not really properly tested/defined. However, we should prepare for that, as I intend to make it work properly.
After further thinking I'd like to distance myself from my previous comment. To explain myself, here's my reasoning.
The current implementation:

Each protocol telemetry decoder feeds a single value via telemetryData.rssi.set(). The user-selected thresholds for the low/critical warnings are applied to whatever value the specific protocol sends in its calls of telemetryData.rssi.set().

Examples are:

Conclusion:
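To make that flow concrete, here is a minimal sketch; every name except telemetryData.rssi.set() is invented for illustration. The point is that the alarm check just compares raw numbers against the shared slot, with no idea what metric the active decoder wrote into it:

```cpp
#include <cstdint>

// Invented sketch: one shared slot that every protocol decoder writes into.
struct RssiSlot {
  uint8_t value = 0;
  void set(uint8_t v) { value = v; }  // called by each protocol decoder
};

struct TelemetryDataSketch {
  RssiSlot rssi;
};

TelemetryDataSketch telemetryData;

// The warning check is metric-agnostic: it compares raw numbers, whether
// the decoder wrote FrSky scaled "RSSI" or ELRS RQLY.
bool lowAlarmActive(uint8_t lowThreshold) {
  return telemetryData.rssi.value < lowThreshold;
}
```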
@mha1 thx for the very good summary! So the thing that would need fixing is actually the range of values that can be set, which is quite restricted right now (6 bits, as an offset from the preset 42/45).
Exactly. I'd make it a uint8_t and allow a 0..100 selection. And please change the text to stop implying the warnings are based solely on "RSSI". Make it a neutral text. I am sure you can find something better than my "RF Link evaluation"
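For illustration only (the field names are invented, and the exact offset encoding is an assumption based on the "6 bits, offset from 42/45" remark above), the difference between the two storage schemes could be sketched as:

```cpp
#include <cstdint>

// Assumed old scheme: the alarm is stored as a small offset from a fixed
// preset (low 45, critical 42), so only a narrow band around the preset
// is reachable (cf. the 15..75 / 12..72 limits stated in the issue).
constexpr int kPresetLow = 45;

int oldLowThreshold(int offset) {  // offset limited by the 6-bit field
  return kPresetLow + offset;
}

// Proposed scheme: store the threshold directly as a uint8_t, clamped
// to the full 0..100 selection.
uint8_t newThreshold(int requested) {
  if (requested < 0) return 0;
  if (requested > 100) return 100;
  return static_cast<uint8_t>(requested);
}
```

With the old scheme a value like 80 is simply unreachable; with a plain uint8_t it is just another valid selection.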
I would prefer something like "RF uplink quality", or something like that, to make the direction clear.
My personal preference would be to also have RSSI in dBm, SNR in dB and a bit error rate available.
Could be done, but this would be a much bigger change, as each of the protocol telemetry decoders would have to send additional information: the type of data source it uses (LQI, RSSI, SNR, ...), the unit of that data source (dB, dBm, %), and the expected range of values it feeds into "RF link quality".
Some (older) modules (e.g. FlySky) provide link quality (Sgnl) in the range of [0, 10]. Re-scaling it to 0 - 100 will not give enough resolution.
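A minimal sketch of why that is: a linear rescale of an 11-step value can only ever produce multiples of 10, so intermediate thresholds like 75% are unreachable:

```cpp
#include <cstdint>

// FlySky's Sgnl is 0..10; multiplying by 10 maps it onto 0..100 but
// yields only 11 distinct values, so e.g. a 75% threshold can never be
// matched exactly by the incoming data.
uint8_t scaleSgnl(uint8_t sgnl) {  // sgnl expected in 0..10
  return static_cast<uint8_t>(sgnl * 10);
}
```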
Doesn't look like this discussion ended in some form of implementable specification, does it?
If we want to cover all user requirements stated above I'd say it is necessary to extend each protocol telemetry decoder to communicate more details about "RF link quality":
Additionally the UI needs to display the communicated "RF link quality" data source type with unit and must allow setting low/critical thresholds according to the communicated expected range.
This might be a pseudo code example for a protocol using some sort of link quality indicator executed in the init section of the decoder:
telemetryData.RFqualityType.set(RFLQ_TYPE_LQI);
telemetryData.RFqualityUnit.set(UNIT_PERCENT);
telemetryData.RFqualityRangeUpper.set(100);
telemetryData.RFqualityRangeLower.set(0);
For a protocol using RSSI:
telemetryData.RFqualityType.set(RFLQ_TYPE_RSSI);
telemetryData.RFqualityUnit.set(UNIT_DBM);
telemetryData.RFqualityRangeUpper.set(-20);
telemetryData.RFqualityRangeLower.set(-120);
@pfeerick googly eyes? what do you mean? you no like it?
Here's the list of telemetry decoders that currently set the "RSSI" value. For most it is clear what they use as a data source; for some we need to find the right interpretation. So from the telemetry decoders' standpoint the changes shouldn't be too difficult. class TelemetryData (notice how this is located in frsky.h) needs to be updated to hold the type, unit, and upper and lower range boundaries. I'd initialize these in class TelemetryData's constructor to an LQI type with unit % and range 100..0 to catch forgotten decoders. I don't know how much of a change this would mean for the UI.
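A rough sketch of that TelemetryData extension (all enum and field names are invented for illustration; the real class lives in frsky.h and would be extended rather than replaced):

```cpp
#include <cstdint>

// Invented names, mirroring the pseudo-code earlier in the thread.
enum RFQualityType : uint8_t { RFLQ_TYPE_LQI, RFLQ_TYPE_RSSI, RFLQ_TYPE_SNR };
enum RFQualityUnit : uint8_t { UNIT_PERCENT, UNIT_DBM, UNIT_DB };

struct TelemetryDataExt {
  RFQualityType rfQualityType;
  RFQualityUnit rfQualityUnit;
  int16_t rfQualityRangeLower;
  int16_t rfQualityRangeUpper;

  // Default to LQI in % with range 0..100 so decoders that were not
  // updated still get a sane interpretation.
  TelemetryDataExt()
      : rfQualityType(RFLQ_TYPE_LQI),
        rfQualityUnit(UNIT_PERCENT),
        rfQualityRangeLower(0),
        rfQualityRangeUpper(100) {}
};
```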
Just following the conversation 😉 I like that thinking, it looks good. 😸
@pfeerick I think the UI part will be the more challenging. Twelve files use the RSSI value the telemetry decoders send with telemetryData.rssi.set(value). Remember, this is most of the time not RSSI but LQI. All of them assume it is a real RSSI and display "RSSI", sometimes with unit dB (?! - thank you FrSky), like the range check pop-up. Some even perform some scaling (e.g. the top bar's 5-bar indicator), and even worse, implicitly or explicitly assume a 100..0 range, e.g. for luaGetRSSI() (scaled RSSI in dBm?!). Cleaning up in 2.9?
At this point, most likely 2.9 at the earliest, yes. Getting close to needing to think about bugfix only PRs and must-be-in-2.8 PRs.
Aw, so you weren't putting your hand up to work on this? :laughing: Like most parts of the code that have evolved over time (as well as being FrSky-centric), lots of interesting code choices were made. So it will be fun to clean it all up, but worth it in the end.
Sure, I'd be happy to work on this as it is kinda confusing right now. But I sure need some assistance for the UI parts. Not only logic but also some visible things need to be worked on.
But the most important thing is to define how things shall work. Currently "RSSI" subsumes a variety of different data sources with quite different physical properties and therefore different semantics. It's like deriving information about "fruit" without specifying whether you are talking about a banana or a tomato (yes, it's a fruit). But making the source known requires a deeper understanding of bananas and tomatoes. Example: the range check pop-up. It currently just identifies the source as a pseudo-RSSI with unit dB (which doesn't seem to bother anyone) and shows a range from 0..+100. This pop-up couldn't be more wrong.
How should this pop-up be presented when tackling this? If the source is a link quality indicator, we'd have to present this pop-up as LQI [%] with range (100..0). If it was a real RSSI, we'd present the pop-up as RSSI [dBm] with a range of -something..-something_smaller, depending on the protocol. If the source is SNR, we'd present the pop-up as SNR [dB], range idk, etc. Same question for the "strength bars" top bar widget.
What I am getting at is that we would expect more knowledge about how the presented values need to be interpreted and how they should be presented to the user. I suppose not everyone knows -90dBm means lower power than -30dBm. Squeezing everything into one pseudo-"RSSI" creates the illusion of easy interpretation and comparability, whilst offering more insight requires more knowledge. I am the worst UI usability designer, so discussion is welcome.
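Assuming the decoder-communicated source type described earlier, the pop-up caption could be derived from the data instead of hardcoded; a sketch with invented names:

```cpp
#include <string>

// Invented names: derive the pop-up caption from the communicated source
// type instead of hardcoding "RSSI [dB]".
enum class Source { LQI, RSSI, SNR };

std::string popupLabel(Source s) {
  switch (s) {
    case Source::LQI:  return "LQI [%]";
    case Source::RSSI: return "RSSI [dBm]";
    case Source::SNR:  return "SNR [dB]";
  }
  return "Link [?]";  // fallback for decoders that communicate nothing
}
```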
I think there are two different/opposite approaches:
How about updating the telemetry handlers in a way that they all produce a cooked link quality indication and, where available, also the protocol-specific values with their respective units? Then people wanting a simple quality index can use this, and people with a better understanding of RF can have the detailed information.
This means implementing both options and allowing the user to choose. Possible, of course, but this means a lot more changes for both the telemetry handlers and the UI. And, probably, changes in the storage.
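A sketch of the two-tier idea (names and ranges invented): each handler keeps its native metric plus the expected range, and a single helper cooks a 0..100 index from it for the simple view:

```cpp
#include <algorithm>
#include <cstdint>

// Invented names: keep the native metric and its expected range; the raw
// value stays available for users who want the real numbers.
struct LinkMetric {
  int16_t raw;        // native value, e.g. -87 (dBm) or 93 (%)
  int16_t rangeLow;   // worst expected raw value
  int16_t rangeHigh;  // best expected raw value
};

// Cooked 0..100 quality index for the simple view.
uint8_t cookedQuality(const LinkMetric& m) {
  if (m.rangeHigh == m.rangeLow) return 0;  // avoid division by zero
  int v = (m.raw - m.rangeLow) * 100 / (m.rangeHigh - m.rangeLow);
  return static_cast<uint8_t>(std::clamp(v, 0, 100));
}
```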
Let's have it in 2 steps:

- extend the settable range to [0; 100] (see #2251) in 2.8
- the rest in 2.9 and up

The primary goal of what we discussed with @jmxp69 was to be able to use the built-in warnings for CRSF / ELRS. As of now, it does not really make sense, as you cannot set up values like 70%-80%, which are currently the advised values.
Is there an existing issue for this feature request?
Is your feature request related to a problem?
RSSI Low Alarm can be set no higher than 75 and no lower than 15. RSSI Critical Alarm can be set no higher than 72 and no lower than 12.
ELRS RQLY is used for RSSI, and the low alarm can safely be set to 80[%] with the critical alarm set to 70[%]. While 70% is within the allowed range, some users may opt for an additional margin of safety and wish to use 75% or even higher.
Describe the solution you'd like
RSSI Low and Critical Alarm values should allow entries to reflect more modern protocols and not just scaled FrSky values.
Describe alternatives you've considered
Disabling telemetry alarms and using logical switch ranges with special functions instead. This is not how we behave like citizens though...(borrowing from a friend).
Additional context
No response