SeppPenner / SparkplugNet

SparkplugNet is a library to use the Sparkplug industrial IoT (IIoT) standard in .Net. It uses MQTTnet in the background.

philosophy of known metrics #93

Open jeff-pf opened 7 months ago

jeff-pf commented 7 months ago

I am failing to see how to add birth metrics to the application's known metrics - it seems like I should be able to do it in the NBirth and DBirth event handlers. Also, how do I update the values of the known metrics using NData and DData? Or is this not the right philosophy for the known metrics - do I need to keep track of the metric values in parallel as well?

SeppPenner commented 7 months ago

it seems like I should be able to do it in the NBirth and DBirth event handlers

No, within the constructor of the application / node. Check out the examples or my answer under https://github.com/SeppPenner/SparkplugNet/issues/92, please.
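
For reference, a rough sketch of what that constructor-based setup looks like, based on the version B examples (the exact Metric constructor, namespaces and constructor overloads may differ between SparkplugNet versions; treat this as the pattern, not the exact API):

using System.Collections.Generic;
using SparkplugNet.VersionB;
using SparkplugNet.VersionB.Data;

// The metrics the application/node should handle are declared up front and
// passed to the constructor, not added later in the NBirth/DBirth handlers.
var knownMetrics = new List<Metric>
{
    new Metric("General/Some metric", DataType.Double, 1.20),
    new Metric("Node Control/Rebirth", DataType.Boolean, false)
};

var application = new SparkplugApplication(knownMetrics);
var node = new SparkplugNode(knownMetrics);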

Also how do I update the value of the know metrics using the NData and DData?

This is not yet possible, but will be possible after https://github.com/SeppPenner/SparkplugNet/issues/44 is done.
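
Until then, one workaround is to keep a separate value cache on the application side and feed it from whatever NData/DData callback your SparkplugNet version exposes. A minimal sketch, assuming you wire it up yourself (the MetricValueCache class is hypothetical, not part of SparkplugNet):

using System.Collections.Concurrent;
using SparkplugNet.VersionB.Data;

// Hypothetical application-side cache; SparkplugNet does not provide this yet.
public class MetricValueCache
{
    private readonly ConcurrentDictionary<string, object?> latestValues = new();

    // Call this for every metric in an incoming NDATA/DDATA payload,
    // from whatever data-received callback your SparkplugNet version exposes.
    public void Update(Metric metric)
    {
        if (!string.IsNullOrWhiteSpace(metric.Name))
        {
            this.latestValues[metric.Name] = metric.Value;
        }
    }

    public bool TryGetValue(string metricName, out object? value)
        => this.latestValues.TryGetValue(metricName, out value);
}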

SeppPenner commented 7 months ago

I hope this is answered with https://github.com/SeppPenner/SparkplugNet/issues/92#issuecomment-2022237084 already. If not, feel free to contact me again / re-open the issue(s).

shouidar commented 7 months ago

@SeppPenner: I agree with @jeff-pf: Let's assume we have a SCADA main host application (SparkplugBApplication) waiting for nodes to connect. It cannot know beforehand which metrics a node will be publishing. It only gets this information when the node connects and its birth message, containing all the metrics it intends to report, is published. At this point the SparkplugBApplication should register all of the node's metrics in its store. Only for subsequent NDATA messages should the application ignore unknown metrics.

SeppPenner commented 7 months ago

@SeppPenner: I agree with @jeff-pf: Let's assume we have a SCADA main host application (SparkplugBApplication) waiting for nodes to connect. It cannot know beforehand which metrics a node will be publishing. It only gets this information when the node connects and its birth message, containing all the metrics it intends to report, is published. At this point the SparkplugBApplication should register all of the node's metrics in its store. Only for subsequent NDATA messages should the application ignore unknown metrics.

I misunderstood this before, but you're absolutely right. This won't work. I need to change the logic for the applications.
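
For what it's worth, a rough sketch of the behaviour described above, i.e. learning the known metrics per node from the birth messages and filtering the data messages against them (this is not how SparkplugNet currently works; the class and names are hypothetical):

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using SparkplugNet.VersionB.Data;

// Hypothetical host-application store, populated from birth messages.
public class PerNodeMetricStore
{
    // Known metric names per node/device key, learned from NBIRTH/DBIRTH.
    private readonly ConcurrentDictionary<string, HashSet<string>> knownMetricsByNode = new();

    // On NBIRTH/DBIRTH: register every metric the node announced.
    public void RegisterBirth(string nodeKey, IEnumerable<Metric> birthMetrics)
    {
        this.knownMetricsByNode[nodeKey] = birthMetrics
            .Where(m => !string.IsNullOrWhiteSpace(m.Name))
            .Select(m => m.Name)
            .ToHashSet();
    }

    // On NDATA/DDATA: keep only metrics that were announced in the birth message.
    public IEnumerable<Metric> FilterData(string nodeKey, IEnumerable<Metric> dataMetrics)
        => this.knownMetricsByNode.TryGetValue(nodeKey, out var known)
            ? dataMetrics.Where(m => !string.IsNullOrWhiteSpace(m.Name) && known.Contains(m.Name))
            : Enumerable.Empty<Metric>();
}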

jeff-pf commented 7 months ago

My temporary solution was to add a public function to the metric storage so that the birth metrics can be added.

public void AddVersionB_BirthMetric(T metric, Metric versionBMetric)
{
    // Check the name of the metric.
    if (string.IsNullOrWhiteSpace(versionBMetric.Name))
    {
        // Check the alias of the metric.
        if (versionBMetric.Alias is null)
        {
            throw new InvalidMetricException("A metric without a name and an alias is not allowed.");
        }

        // Check the value of the metric.
        if (versionBMetric.Value is null)
        {
            throw new InvalidMetricException("A metric without a current value is not allowed.");
        }

        // Hint: Data type doesn't need to be checked, is not nullable.
        this.knownMetricsByAlias[versionBMetric.Alias.Value] = metric;
    }
    else
    {
        // Check the value of the metric.
        if (versionBMetric.Value is null)
        {
            throw new InvalidMetricException("A metric without a current value is not allowed.");
        }

        // Check the alias of the metric.
        if (versionBMetric.Alias is null)
        {
            // Hint: Data type doesn't need to be checked, is not nullable.
            this.knownMetricsByName[versionBMetric.Name] = metric;
        }
        else
        {
            // Hint: Data type doesn't need to be checked, is not nullable.
            this.knownMetricsByAlias[versionBMetric.Alias.Value] = metric;
        }
    }
}

I am building a list of birth metrics in the birth event handler, and the application then adds them to the storage.
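
The call site then looks roughly like this (the handler name and the way the KnownMetricsStorage is reached are assumptions; for version B, the storage type parameter T is Metric itself, so the same object is passed twice):

// Hypothetical birth handler inside a class derived from the application;
// event/callback names vary between SparkplugNet versions.
private void OnBirthReceived(IEnumerable<Metric> birthMetrics)
{
    foreach (var metric in birthMetrics)
    {
        // For version B, the stored value and the version B metric to
        // validate are the same object.
        this.KnownMetricsStorage.AddVersionB_BirthMetric(metric, metric);
    }
}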

jeff-pf commented 7 months ago

Going down the rabbit hole a little further, I see a few other issues as well.

1) If the metric storage is meant to be the place where data updates live, I do not see how that is going to happen at the moment. I can see two ways to handle this: either do it automatically, or allow the application to update the values itself (see point 3 below).

2) The metrics are not tied to a node/device. If I have multiple instances of the same device on a node, the same metrics will be published, but there is no way to tie them to the node/device they came from. The same applies to multiple nodes with the same devices. My past solution was to add the topic to the metric (see the sketch after this list).

3) There may be nodes or devices that are not needed by the application. For example, in Ignition's Sparkplug B setup, the node/device names that data is collected from are entered explicitly.
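
Regarding point 2, a minimal sketch of what keying the store by node/device (i.e. by the topic parts) could look like (the MetricKey record and ScopedMetricStore class are hypothetical, not part of SparkplugNet):

using System.Collections.Concurrent;
using SparkplugNet.VersionB.Data;

// Hypothetical composite key so identical metric names on different
// nodes/devices don't collide; not part of SparkplugNet.
public readonly record struct MetricKey(
    string GroupId,
    string EdgeNodeId,
    string? DeviceId,
    string MetricName);

public class ScopedMetricStore
{
    private readonly ConcurrentDictionary<MetricKey, Metric> metrics = new();

    public void Upsert(string groupId, string edgeNodeId, string? deviceId, Metric metric)
        => this.metrics[new MetricKey(groupId, edgeNodeId, deviceId, metric.Name)] = metric;

    public bool TryGet(string groupId, string edgeNodeId, string? deviceId, string name, out Metric? metric)
        => this.metrics.TryGetValue(new MetricKey(groupId, edgeNodeId, deviceId, name), out metric);
}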

I am happy to help think through this - let me know if there is anything I can help with.

jeff-pf commented 7 months ago

Rules for the Node Control/Rebirth metric from the Sparkplug 3.0 spec (in reference to issue #44, but also relevant here, just to clarify):

• [tck-id-operational-behavior-data-commands-rebirth-name] An NBIRTH message MUST include a metric with a name of Node Control/Rebirth.
• [tck-id-operational-behavior-data-commands-rebirth-name-aliases] When aliases are being used by an Edge Node an NBIRTH message MUST NOT include an alias for the Node Control/Rebirth metric.
  ◦ This is to ensure that any Host Application connecting to the MQTT Server is capable of requesting a rebirth without knowledge of any potential alias being used for this metric.
• [tck-id-operational-behavior-data-commands-rebirth-datatype] The Node Control/Rebirth metric in the NBIRTH message MUST have a datatype of Boolean.
• [tck-id-operational-behavior-data-commands-rebirth-value] The Node Control/Rebirth metric value in the NBIRTH message MUST have a value of false.

A Rebirth Request consists of the following message from a Sparkplug Host Application with the following characteristics.

• [tck-id-operational-behavior-data-commands-ncmd-rebirth-verb] A Rebirth Request MUST use the NCMD Sparkplug verb.
• [tck-id-operational-behavior-data-commands-ncmd-rebirth-name] A Rebirth Request MUST include a metric with a name of Node Control/Rebirth.
• [tck-id-operational-behavior-data-commands-ncmd-rebirth-value] A Rebirth Request MUST include a metric value of true.

Upon receipt of a Rebirth Request, the Edge Node must do the following.

• [tck-id-operational-behavior-data-commands-rebirth-action-1] When an Edge Node receives a Rebirth Request, it MUST immediately stop sending DATA messages.
• [tck-id-operational-behavior-data-commands-rebirth-action-2] After an Edge Node stops sending DATA messages, it MUST send a complete BIRTH sequence including the NBIRTH and DBIRTH(s) if applicable.
• [tck-id-operational-behavior-data-commands-rebirth-action-3] The NBIRTH MUST include the same bdSeq metric with the same value it had included in the Will Message of the previous MQTT CONNECT packet.
  ◦ Because a new MQTT Session is not being established, there is no reason to update the bdSeq number.
• After the new BIRTH sequence is published, the Edge Node may continue sending DATA messages.
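
Translated into SparkplugNet terms, the two metrics described above would look roughly like this (assuming the Metric(name, dataType, value) constructor and DataType enum from the version B examples):

using SparkplugNet.VersionB.Data;

// NBIRTH: the Node Control/Rebirth metric must be present, have no alias,
// have a datatype of Boolean and a value of false.
var nbirthRebirthMetric = new Metric("Node Control/Rebirth", DataType.Boolean, false);

// NCMD rebirth request from the host application: same name, value true.
var rebirthRequest = new Metric("Node Control/Rebirth", DataType.Boolean, true);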

jeff-pf commented 7 months ago

I made this change so I can publish a node rebirth command from the application

protected override async Task PublishNodeCommandMessage(IEnumerable<Metric> metrics, string groupIdentifier, string edgeNodeIdentifier)
{
    if (this.Options is null)
    {
        throw new ArgumentNullException(nameof(this.Options), "The options aren't set properly.");
    }

    // A rebirth request must bypass the known metrics filter, otherwise the
    // Node Control/Rebirth metric would be dropped as unknown.
    var rebirth = false;
    var cmdMetrics = new List<Metric>();

    foreach (var metric in metrics)
    {
        if (metric.Name == "Node Control/Rebirth" && metric.Value is true)
        {
            rebirth = true;
            cmdMetrics.Add(metric);
            break;
        }
    }

    if (!rebirth)
    {
        var filteredMetrics = this.KnownMetricsStorage.FilterMetrics(metrics, SparkplugMessageType.NodeCommand);

        foreach (var metric in filteredMetrics)
        {
            cmdMetrics.Add(metric);
        }
    }

    // Get the data message.
    var dataMessage = this.messageGenerator.GetSparkplugNodeCommandMessage(
        this.NameSpace,
        groupIdentifier,
        edgeNodeIdentifier,
        cmdMetrics,
        this.LastSequenceNumber,
        this.LastSessionNumber,
        DateTimeOffset.UtcNow);

    // Increment the sequence number.
    this.IncrementLastSequenceNumber();

    // Publish the message.
    await this.client.PublishAsync(dataMessage);
}
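
With that override in place, a rebirth can be requested from inside the same derived application class roughly like this (the Metric constructor is the same assumption as in the earlier sketch):

// Inside the same derived application class as the override above.
public Task RequestNodeRebirthAsync(string groupIdentifier, string edgeNodeIdentifier)
{
    var rebirthMetrics = new List<Metric>
    {
        // Per the spec: an NCMD rebirth request uses Node Control/Rebirth = true.
        new Metric("Node Control/Rebirth", DataType.Boolean, true)
    };

    return this.PublishNodeCommandMessage(rebirthMetrics, groupIdentifier, edgeNodeIdentifier);
}
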
SeppPenner commented 7 months ago

Let me think about this whole mess for a few more days. I guess how it's implemented right now is way too complicated...