Open-Network-Models-and-Interfaces-ONMI / TAPI

LF ONMI Transport API Repository (TAPI)
https://github.com/Open-Network-Models-and-Interfaces-ONMI/TAPI/wiki
Apache License 2.0

Photonic model: pre-FEC and post-FEC BER inconsistent definition #380

Closed · GermanoSMO closed this issue 6 months ago

GermanoSMO commented 5 years ago

In the current FecPropertiesPac, the preFecBer and postFecBer are defined as "Integer" and described as "counters". However, this seems inconsistent to me.

First inconsistency: by definition, the Bit Error Rate is the ratio of the number of errored bits to the total number of received bits. Using an "Integer" to express this ratio does not seem correct, unless the intention is to express the exponent of that ratio on a base-10 scale, e.g. "-3" for a BER of 10^-3. Because the ratio is always less than 1, this exponent is always a negative number. If this was the intention, then the translation into YANG is not correct, since these counters are defined with the "uint64" type. If instead the intention was to map the fractional result to an integer using e.g. a scale factor, then the value to use for that scale factor is not indicated. In any case, this point needs to be fixed.
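To make the mismatch concrete, here is a minimal illustrative YANG sketch (the module and leaf names are invented for this example and are not taken from the TAPI model): a uint64 cannot hold a fractional ratio, whereas either a signed exponent or a decimal64 value could.

```yang
// Illustration only: not the TAPI module; names are invented for this example.
module ber-representation-sketch {
  namespace "urn:example:ber-representation-sketch";
  prefix brs;

  leaf pre-fec-ber-as-counter {
    type uint64;
    description
      "Counter-style definition, as currently generated; cannot express a
       fractional ratio such as 1e-12.";
  }

  leaf pre-fec-ber-as-exponent {
    type int8;
    description
      "Base-10 exponent interpretation, e.g. -3 for a BER of 10^-3; always
       negative for a BER below 1, so a signed type would be required.";
  }

  leaf pre-fec-ber-as-ratio {
    type decimal64 { fraction-digits 18; }
    description
      "Direct fractional-ratio interpretation, representable down to 1e-18.";
  }
}
```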

Second point to be clarified: the FecPropertiesPac reports a single value for preFecBer and postFecBer, since they are considered "counters". If there is a single value, I assume it represents the average, since it can only be evaluated from the progressive (accumulated) count. Here again I have a concern, because this approach cannot highlight error bursts/peaks. A "gauge" with tidemark indications might give a better picture, marking the best and the worst preFecBer and postFecBer measured inside the observation window. The average value may still be evaluated/reported, if deemed necessary.
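Purely as an illustration of the gauge/tidemark idea (again, the module, grouping, and leaf names are invented here, not a proposal for the actual TAPI YANG):

```yang
// Illustration only: a gauge with tidemarks instead of a single accumulated counter.
module ber-gauge-sketch {
  namespace "urn:example:ber-gauge-sketch";
  prefix bgs;

  grouping ber-gauge {
    description
      "min and max record the best and worst BER observed inside the
       observation window (tidemarks, reset at each window boundary);
       avg is the accumulated average, i.e. the single value reported today.";
    leaf min { type decimal64 { fraction-digits 18; } }
    leaf max { type decimal64 { fraction-digits 18; } }
    leaf avg { type decimal64 { fraction-digits 18; } }
  }

  container pre-fec-ber  { uses ber-gauge; }
  container post-fec-ber { uses ber-gauge; }
}
```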

nigel-r-davis commented 5 years ago

I agree that there appear to be some issues here. We need to consider each measurement property in terms of the "attributes" set out in 4.5.2 of TR-512.A.4. Obviously, not all properties have all attributes. When we did this work in the core IM team, the intention was that each property (measurement, configuration, etc.) would have an associated rigorous definition that would drive tooling to expand it (consistently) into the appropriate set of attributes (tooling is used to avoid hand-crafted inconsistency).

Section 4.5.1 has a candidate list of properties (without rigorous definition) that would be fed into the tooling.

I have extracted the text…

4.5.2 Attributes related to each property

This section attempts to rationalize which attributes will be required for each property listed above. It covers spec, configuration, measurement and reporting considerations. It also implies which will benefit from notification and hence be in some streaming telemetry.

4.5.2.1 Requested/Intended
For all adjustable parameters it is reasonable to state constraints in an outcome oriented constraint based interaction.
• Target (average/mean)
• ToleranceLower (deviation)
• ToleranceUpper (deviation)

4.5.2.2 Current/actual – measure
A key consideration is the degree of change of the property. If it changes rarely then notification is reasonable, if it changes frequently or in bursts, then notification may not be sensible other than for spotlighting.
• instantaneousState
• instantaneousValue
• averageMean
• currentEventCounts

4.5.2.3 Threshold – measure
Any measure may require a combination of thresholds. In some cases the best value is zero and hence only upper threshold are meaningful.
• lowerWatermark
• upperWatermark
• LowerWarn
• LowerSevere
• LowerFail
• NoValueWarn
• NoValueSevere
• NoValueFail
• UpperWarn
• UpperSevere
• UpperFail
• TimePeriod

4.5.2.4 Alarms
• BooleanAlarm

4.5.2.5 Capability (requested/intended, current, threshold and alarms)
For all parameters there will be definition. The degree of support for each may vary in terms of range supported etc.
• Default
• Range
• Preference
• Interaction
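As a purely illustrative sketch of that expansion idea (none of the names below are defined in TR-512 or TAPI; they are invented for this example), tooling could expand a single measurement property such as preFecBer into a subset of the attribute categories listed above:

```yang
// Sketch only: a possible expansion of one measurement property into some of
// the 4.5.2 attribute categories. All names are invented for illustration.
module property-expansion-sketch {
  namespace "urn:example:property-expansion-sketch";
  prefix pes;

  grouping pre-fec-ber-expanded {
    container current {                       // 4.5.2.2 Current/actual - measure
      leaf instantaneous-value { type decimal64 { fraction-digits 18; } }
      leaf average-mean        { type decimal64 { fraction-digits 18; } }
    }
    container threshold {                     // 4.5.2.3 Threshold - measure
      leaf lower-watermark { type decimal64 { fraction-digits 18; } }
      leaf upper-watermark { type decimal64 { fraction-digits 18; } }
      leaf upper-warn      { type decimal64 { fraction-digits 18; } }
      leaf upper-severe    { type decimal64 { fraction-digits 18; } }
      leaf upper-fail      { type decimal64 { fraction-digits 18; } }
      leaf time-period     { type uint32; units "seconds"; }
    }
    leaf boolean-alarm { type boolean; }      // 4.5.2.4 Alarms
  }
}
```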

GermanoSMO commented 5 years ago

After reviewing the corresponding definitions of pre-FEC and post-FEC BER in both the OpenConfig and OpenROADM projects, I would like to propose a complete solution for this issue, in terms of the UML model.

1. Consistent numeric representation: The proposal is to replace the current "Integer" type in the UML with a "Real" type with property "LENGTH_64_BIT" (double). With the current UML-to-YANG tool, this type will be converted into a "decimal64" YANG type with 16 decimal digits. This allows a BER as low as 1x10^-16 to be represented; lower values will simply be reported as zero.

Note that the YANG used in OpenConfig can represent a minimum BER of 1x10^-18, while for OpenROADM the lowest value is 1x10^-17. Vendors and operators commonly consider a BER of 1x10^-15 as near error-free or error-free. The conclusion is that the minimum representation of 1x10^-16 in this proposal is still suitable, with the advantage that no change to the UML-to-YANG conversion tool is needed.

2. Conversion from single counter to triplet (gauge): The proposal is to replace the single attributes preFecBer and postFecBer in the UML model with the respective triplets:

preFecBerMin: Real[1] (LENGTH_64_BIT)
preFecBerMax: Real[1] (LENGTH_64_BIT)
preFecBerAv: Real[1] (LENGTH_64_BIT)

postFecBerMin: Real[1] (LENGTH_64_BIT)
postFecBerMax: Real[1] (LENGTH_64_BIT)
postFecBerAv: Real[1] (LENGTH_64_BIT)

and to add an attribute indicating the length (duration) of the evaluation period. If these measurements are embedded in a standard 15-minute or 24-hour collection period this attribute is redundant; an explicit attribute, however, allows the duration of the observation window to be defined freely. A rough sketch of the resulting YANG is given below.
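This sketch shows what the combined proposal could produce after UML-to-YANG conversion, assuming the decimal64/16-digit mapping from point 1. It is not generated output: the module name, the typedef, and the evaluation-period leaf (and its type) are assumptions made for illustration.

```yang
// Rough sketch of the combined proposal after UML-to-YANG conversion
// (decimal64 with 16 fraction digits, per point 1). The typedef name, the
// evaluation-period leaf, and its type are assumptions for illustration.
module fec-ber-proposal-sketch {
  namespace "urn:example:fec-ber-proposal-sketch";
  prefix fbp;

  typedef ber {
    type decimal64 { fraction-digits 16; }   // smallest non-zero value: 1x10^-16
  }

  container fec-properties {
    leaf pre-fec-ber-min  { type ber; }      // best value in the observation window
    leaf pre-fec-ber-max  { type ber; }      // worst value in the observation window
    leaf pre-fec-ber-av   { type ber; }      // average over the observation window
    leaf post-fec-ber-min { type ber; }
    leaf post-fec-ber-max { type ber; }
    leaf post-fec-ber-av  { type ber; }
    leaf evaluation-period {
      type uint32;
      units "seconds";
      description "Duration of the observation window, e.g. 900 for 15 min.";
    }
  }
}
```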

GermanoSMO commented 5 years ago

Sorry, I closed this issue by mistake

amazzini commented 6 months ago

Solved in 2.5.0