Whathecode opened 1 year ago
> I agree - in general there seems to be some redundancy in the data stream - also the measure type is stated several times.
I presume you mean that the polymorphic type descriptor of individual data points duplicates the data stream data type on data uploads:
"batch": [
{
"dataStream": {
"studyDeploymentId": "3de35fb5-c39e-4fea-8ccc-3a70ca618ce7",
"deviceRoleName": "Device",
"dataType": "dk.cachet.carp.stubpoint" // DEFINES TYPE
},
"firstSequenceId": 0,
"measurements": [
{
"sensorStartTime": 0,
"data": {
"__type": "dk.cachet.carp.stubpoint", // REDUNDANT; ALWAYS THE SAME
"data": "Stub"
}
}
]
}
]
I've made a separate issue for this, as they are quite distinct.
However, I've marked both of these as enhancements. It may become relevant if we need to reduce payload sizes (and even then, other approaches such as compression may be preferable first). But I believe optimizing this comes with added complexity in the serializers for all potential target platforms: implementing custom serializers in core will work for both the backend and JS/TS, but you'd still need to do something similar in Dart.
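As a rough illustration of what such a custom serializer could look like (not the actual core implementation), a kotlinx.serialization `JsonTransformingSerializer` could drop the discriminator on upload and re-insert it on download, since the data type is already known from the data stream header. The `Data`/`StubPoint` types and serializer name below are placeholders:

```kotlin
import kotlinx.serialization.*
import kotlinx.serialization.json.*

// Placeholder for the polymorphic data hierarchy; not the actual core types.
@Serializable
sealed class Data

@Serializable
@SerialName( "dk.cachet.carp.stubpoint" )
data class StubPoint( val data: String ) : Data()

// Drops "__type" when writing a measurement and re-inserts it when reading,
// using the data type which is already known from the data stream header.
class KnownDataTypeSerializer( private val dataType: String ) :
    JsonTransformingSerializer<Data>( Data.serializer() )
{
    override fun transformSerialize( element: JsonElement ): JsonElement =
        JsonObject( element.jsonObject.filterKeys { it != "__type" } )

    override fun transformDeserialize( element: JsonElement ): JsonElement =
        JsonObject( element.jsonObject + ( "__type" to JsonPrimitive( dataType ) ) )
}

fun main()
{
    val json = Json { classDiscriminator = "__type" }
    val serializer = KnownDataTypeSerializer( "dk.cachet.carp.stubpoint" )

    val point: Data = StubPoint( "Stub" )
    val compact = json.encodeToString( serializer, point )
    println( compact ) // {"data":"Stub"} -- discriminator omitted.

    val restored = json.decodeFromString( serializer, compact )
    println( restored ) // StubPoint(data=Stub)
}
```

The catch is exactly what is mentioned above: this only covers Kotlin (and its JS/TS output); a Dart implementation would need an equivalent transformation of its own.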
A `TriggeredTask` always gets uploaded using a `DataStreamPoint`, which already contains `triggerIds` and `deviceRoleName`. In the case of `TriggeredTask` data points, the "reason" for collecting it will always be just the single trigger which triggered the task. Therefore, `triggerIds` should always just contain the ID of the trigger. This makes `triggerId` in `TriggeredTask` redundant.

Similarly, `deviceRoleName` always corresponds.
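To make the overlap concrete, here is a rough sketch of the two shapes involved. The field names follow the discussion above; everything else is an assumption rather than the actual core definitions:

```kotlin
// Rough sketch of the overlap; not the actual core definitions.
data class DataStreamPoint<TData>(
    val studyDeploymentId: String,
    val deviceRoleName: String,  // The device role is already identified here ...
    val triggerIds: List<Int>,   // ... as is the trigger which caused collection.
    val data: TData
)

data class TriggeredTask(
    val triggerId: Int,                     // Redundant: always the single ID in `triggerIds`.
    val taskName: String,
    val destinationDeviceRoleName: String   // Redundant: corresponds to `deviceRoleName`.
)
```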