opengeospatial / sensorthings

The official web site of the OGC SensorThings API standard specification.

Batch Observations Use Case #163

Open doublebyte1 opened 11 months ago

doublebyte1 commented 11 months ago

I acknowledge that STA is designed for IoT devices, where we have streams of real-time data, but I was wondering whether it could also be used in the context of offline experiments. This is the case where an experiment (e.g. a Thing) is run and the data is persisted on a storage device, such as a hard drive. Afterwards, the complete dataset (e.g. all the observations) would be submitted to an STA server.

I am also unsure about how to use the data model to best describe the case where the sensor is moving. From the discussion in https://github.com/opengeospatial/sensorthings/issues/33, I infer that the best approach would be to use the HistoricalLocations entity set. In the case of an offline experiment, the Thing's Location would be the most recent one, i.e. the one from the last observation, while all previous locations would be in the set of HistoricalLocations. Would this be the correct approach?

If that is the case, is it possible to insert the complete dataset as the payload of a single POST request containing an array of locations? Does anyone have an example of what such a payload would look like (e.g. the actual JSON)?
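
To make the question more concrete, this is roughly what I have in mind, one entity per request (the service root, ids and coordinates are placeholders, and I am not sure whether servers accept HistoricalLocations created directly like this):

```python
# Rough sketch only: backfilling recorded positions one entity at a time.
# The service root, @iot.id and coordinates are placeholders, and servers may
# differ in whether they accept HistoricalLocations created directly like this.
import requests

BASE = "https://example.org/sta/v1.1"  # placeholder SensorThings service root

recorded_track = [
    ("2024-01-15T10:00:00Z", (8.64, 49.87)),
    ("2024-01-15T10:05:00Z", (8.65, 49.88)),
]

for ts, (lon, lat) in recorded_track:
    payload = {
        "time": ts,
        "Locations": [{
            "name": "Recorded position",
            "description": "Position logged during the offline experiment",
            "encodingType": "application/geo+json",  # application/vnd.geo+json in STA v1.0
            "location": {"type": "Point", "coordinates": [lon, lat]},
        }],
    }
    # POSTing below the Thing links the HistoricalLocation to Thing(1) implicitly.
    r = requests.post(f"{BASE}/Things(1)/HistoricalLocations", json=payload)
    r.raise_for_status()
```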

doublebyte1 commented 11 months ago

@pascallike

hylkevds commented 11 months ago

Entities are created one at a time, though Observations can be posted many at a time using the Data Array extension. Those individual requests can additionally be bundled into a single batch operation using the Batch Processing extension, for increased efficiency.
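
As a rough sketch of the Data Array approach (the service root and @iot.id values below are just placeholders):

```python
# Minimal sketch of a Data Array request: many Observations for one Datastream
# are sent to the CreateObservations action in a single POST.
import requests

BASE = "https://example.org/sta/v1.1"  # placeholder SensorThings service root

payload = [{
    "Datastream": {"@iot.id": 1},
    "components": ["phenomenonTime", "result"],
    "dataArray@iot.count": 3,
    "dataArray": [
        ["2024-01-15T10:00:00Z", 20.1],
        ["2024-01-15T10:05:00Z", 20.4],
        ["2024-01-15T10:10:00Z", 20.6],
    ],
}]

r = requests.post(f"{BASE}/CreateObservations", json=payload)
r.raise_for_status()
print(r.json())  # one self-link (or "error") per created Observation
```

Each row can also point to an existing FeatureOfInterest by adding the FeatureOfInterest/id component.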

When dealing with moving Things, the order of inserts is important if you rely on the automatically generated FeatureOfInterest: the FeatureOfInterest of an Observation is then generated from the Location that is set on the Thing at the moment the Observation is inserted. If you explicitly supply the FeatureOfInterest when posting an Observation, the insert order doesn't matter.
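
And a sketch of posting an Observation with an explicitly supplied FeatureOfInterest, which makes the insert order irrelevant (again with placeholder values):

```python
# Sketch of a single Observation carrying its own FeatureOfInterest, so the result
# stays tied to the recorded position rather than to the Thing's current Location.
# Service root, @iot.id and coordinates are placeholders.
import requests

BASE = "https://example.org/sta/v1.1"  # placeholder SensorThings service root

observation = {
    "phenomenonTime": "2024-01-15T10:05:00Z",
    "result": 20.4,
    "Datastream": {"@iot.id": 1},
    "FeatureOfInterest": {
        "name": "Recorded position",
        "description": "Position of the sensor at the time of this observation",
        "encodingType": "application/geo+json",  # application/vnd.geo+json in STA v1.0
        "feature": {"type": "Point", "coordinates": [8.65, 49.88]},
    },
}

r = requests.post(f"{BASE}/Observations", json=observation)
r.raise_for_status()
```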