stmccartneyncc opened 2 years ago
The datatypes are officially enumerated as part of the protobuf definition now. I'd argue that it would make more sense to use those and have something like:
```
{
  "name": "bdSeq",
  "timestamp": 1486144502122,
  "dataType": DataType.UInt64,
  "value": 0
}
```
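For reference, the enumeration in question can be sketched as a plain Python enum (a sketch of the first block of the Sparkplug B `DataType` values; `UInt64 = 8` matches the encoded example later in this thread, the other values are assumed from the `sparkplug_b.proto` definition):

```python
from enum import IntEnum

# Sketch of the leading Sparkplug B DataType values (assumed from
# sparkplug_b.proto; UInt64 = 8 is confirmed by the wire dump below).
class DataType(IntEnum):
    Int8 = 1
    Int16 = 2
    Int32 = 3
    Int64 = 4
    UInt8 = 5
    UInt16 = 6
    UInt32 = 7
    UInt64 = 8
    Float = 9
    Double = 10
    Boolean = 11
    String = 12

# The metric's "datatype" field carries the numeric value on the wire,
# while the enum name is what a human-readable encoding would show.
print(DataType.UInt64.value)  # → 8
print(DataType.UInt64.name)   # → UInt64
```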
Good point, I hadn't spotted that change. My comment is based on this entry from the datatype definition under Metric:
> The datatype MUST be an unsigned 32-bit integer representing the datatype
However I can see now that it also says this, which is ambiguous:
> The datatype MUST be one of the enumerated values as shown in the valid Sparkplug Data Types
There's not a lot of point in having the DataType enum defined if it's never used(?), so I would suggest the two options are:
I'm only just starting to use Sparkplug, so I don't really feel I have the background to make any kind of recommendation. Option 1 feels like the 'right' thing to do, but I assume that breaking backwards compatibility is a no-no?
I think the backward-compatibility impact is only that the code auto-generated by the Google protobuf tooling will change; the actual bytes on the wire would not change.
```
{ "name": "bdSeq", "timestamp": 1486144502122, "dataType": DataType.UInt64, "value": 0 }
```
This is not valid JSON; it would need quotes around "DataType.UInt64". There is no official Sparkplug JSON encoding — it is only meant to give a human-readable idea of what the spec is trying to convey. The tricky part is that we want to convey UInt64, but the payload will actually carry an integer value.
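To illustrate the point (a minimal sketch using Python's `json` module; the key names follow the snippet above, not any official encoding):

```python
import json

# The unquoted enum reference is rejected by any JSON parser:
try:
    json.loads('{ "name": "bdSeq", "dataType": DataType.UInt64, "value": 0 }')
except json.JSONDecodeError as e:
    print("invalid JSON:", e.msg)

# Quoting the enum name makes it parse, but a reader must then map the
# string "UInt64" back to the numeric value (8) that goes on the wire.
metric = json.loads('{ "name": "bdSeq", "dataType": "UInt64", "value": 0 }')
print(metric["dataType"])
```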
Maybe instead of JSON we could use Hjson, which would let us convey things better.
It's arguably worse since it's not JSON any more, but the `protoc` command-line tool for working with protobuf definitions can convert binary protobuf streams to and from a text format that it understands, and it's reasonably readable. The snippet above would look like this:
```
metrics {
  name: "bdSeq"
  timestamp: 1486144502122
  datatype: 8
  long_value: 0
}
```
And you convert it with command lines like this (the text above is in `pbex.txt`, and `hd` is just a hex dumper):
```
$ protoc --encode=org.eclipse.tahu.protobuf.Payload --proto_path . sparkplug_b.proto < pbex.txt | hd
000000 12 12 0a 05 62 64 53 65 71 18 ea f2 f5 a8 a0 2b >....bdSeq......+<
000010 20 08 58 00 > .X.<
000014
```
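Those bytes can be verified by hand-walking the protobuf wire format (a sketch using only the Python standard library; the helper name is illustrative, and the field numbers are those of the Sparkplug B `Metric` message shown in the text format above):

```python
# The bytes from the hex dump above.
payload = bytes.fromhex("12120a05626453657118eaf2f5a8a02b20085800")

def read_varint(buf, pos):
    """Read a base-128 varint starting at pos; return (value, new_pos)."""
    value, shift = 0, 0
    while True:
        b = buf[pos]
        pos += 1
        value |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:
            return value, pos

# Outer Payload message: field 2 (metrics), wire type 2 (length-delimited).
tag, pos = read_varint(payload, 0)
assert tag >> 3 == 2 and tag & 7 == 2
length, pos = read_varint(payload, pos)
metric = payload[pos:pos + length]

# Walk the embedded Metric message, collecting field number → value.
fields, pos = {}, 0
while pos < len(metric):
    tag, pos = read_varint(metric, pos)
    field, wire = tag >> 3, tag & 7
    if wire == 2:                      # length-delimited (name)
        n, pos = read_varint(metric, pos)
        fields[field] = metric[pos:pos + n]
        pos += n
    else:                              # varint (timestamp, datatype, value)
        fields[field], pos = read_varint(metric, pos)

print(fields[1].decode())  # → bdSeq         (field 1: name)
print(fields[3])           # → 1486144502122 (field 3: timestamp)
print(fields[4])           # → 8             (field 4: datatype = UInt64)
print(fields[11])          # → 0             (field 11: long_value)
```

Note that the datatype arrives as the bare varint `8`; nothing on the wire says "UInt64", which is exactly the readability gap being discussed.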
If we're stuck in the realm of "no official JSON definition" anyway, I'm just thinking that jumping to an already-existing and usable format could be useful.
And as a bigger change too far off base for this revision (but maybe in a future rev?): if you tweak the Metric datatype from `optional uint32 datatype = 4;` to `optional DataType datatype = 4;`, the protoc text format uses names:
```
metrics {
  name: "bdSeq"
  timestamp: 1486144502122
  datatype: UInt64
  long_value: 0
}
```
I think the payload examples in Section 6 - Payloads are incorrect. They currently (for example) look like this:
and I think they should be:
Is that right? I'm happy to go and make the changes and submit a pull request, just wanted to get agreement first that it's correct.