proffalken opened this issue 1 year ago
The devil is unfortunately in the details. I agree that the middleware is inconvenient, but it silently fulfills more roles than just doing remote writes to a database. The middleware (which is left up to the user) is also serving as a translation layer to a certain extent.
The schema for the decoded payloads is not standardized: the payload decoders can output arbitrary JSON, which doesn't always translate cleanly to metrics.
There are also concerns about consistency: do you really want to graph `temp`, `temperature` and `sensor_1.tempC` as different metrics, just because the sensors come from different vendors? And should you have one unified unit instead of a Tower of Babel in general (Fahrenheit vs Celsius vs Kelvin)?
Our answer to the above is payload normalization - under a normalized payload, it makes more sense to have such a standardized remote writer, but I don't see a good user experience for such a generic integration if the input is the decoded payload.
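To make the translation-layer role concrete, here is a minimal sketch of the kind of normalization such middleware ends up doing today. The vendor payload shapes and the `normalize_temperature` helper are illustrative assumptions, not the platform's actual normalized-payload schema:

```python
# Sketch of payload normalization: map several hypothetical vendor-specific
# decoded payloads onto one canonical metric name and unit (Celsius).
# The field names below are assumptions for illustration only.

def normalize_temperature(decoded: dict) -> dict:
    """Return {"temperature": <Celsius>} from a few hypothetical vendor payloads."""
    if "temp" in decoded:                      # vendor A: Celsius under "temp"
        celsius = float(decoded["temp"])
    elif "temperature_f" in decoded:           # vendor B: Fahrenheit
        celsius = (float(decoded["temperature_f"]) - 32.0) * 5.0 / 9.0
    elif "sensor_1" in decoded:                # vendor C: nested {"sensor_1": {"tempC": ...}}
        celsius = float(decoded["sensor_1"]["tempC"])
    else:
        raise ValueError("no recognized temperature field")
    return {"temperature": round(celsius, 2)}

print(normalize_temperature({"temp": 21.5}))                 # → {'temperature': 21.5}
print(normalize_temperature({"temperature_f": 68.0}))        # → {'temperature': 20.0}
print(normalize_temperature({"sensor_1": {"tempC": 19.0}}))  # → {'temperature': 19.0}
```

A generic remote writer would only be practical once the platform performs this kind of mapping itself, which is the payload-normalization work referenced above.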
@adriansmares ok, that makes sense, thanks for the comprehensive reply.
Sounds like for now the solution is to continue with the MQTT->Prometheus bridge but keep an eye on the payload normalisation work for the future?
Happy for you to close this off if you think that's the best course of action.
Summary
Having the ability to send sensor data directly from the platform to an OpenTelemetry (OTEL) collector or a Prometheus remote_write endpoint would remove the current need for an intermediate application and would enable direct integration with Grafana Cloud or similar.
Current Situation
At present, in order to get data into Prometheus or another OTEL-focused solution, you need to create an application that hangs off the back of AWS/Azure IoT or MQTT, consumes the data feed, and then forwards the data on to the appropriate data store/engine.
Moving to something like Prometheus Remote Write or the OTEL Collector's OTLP endpoint would remove the intermediate application and use well-established protocols and libraries to achieve the same goal.
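For context, the core of such an intermediate application is usually just a translation step: flatten the decoded payload into metric samples and push them onward. A minimal sketch of that step, using Prometheus exposition-format lines for readability (a real bridge would use the remote_write protobuf wire format, and the payload shape here is an assumption):

```python
# Sketch of the translation step an MQTT->Prometheus bridge performs:
# flatten numeric fields of a decoded uplink payload (including nested
# dicts) into Prometheus-style samples. Names are illustrative.

def payload_to_samples(device_id: str, decoded: dict, prefix: str = "sensor") -> list:
    """Flatten numeric payload fields into exposition-format sample lines."""
    lines = []

    def walk(obj, path):
        if isinstance(obj, dict):
            for key, value in obj.items():
                walk(value, path + [key])
        elif isinstance(obj, (int, float)) and not isinstance(obj, bool):
            metric = "_".join([prefix] + path)
            lines.append(f'{metric}{{device_id="{device_id}"}} {obj}')

    walk(decoded, [])
    return lines

print(payload_to_samples("dev-01", {"temp": 21.5, "battery": {"voltage": 3.6}}))
```

Every user of the platform who wants Prometheus-compatible metrics ends up writing and operating some variant of this glue code themselves.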
Why do we need this? Who uses it, and when?
Many organisations use Grafana already for charting their IoT metrics, and I suspect (having written similar things in the past) that those organisations have some kind of in-house middleware that translates between MQTT and their data store of choice.
Where that data store is Prometheus or OTEL-based, being able to configure a target `remote_write` or OTLP endpoint and have the sensor data automatically stream to that location would remove significant toil from the organisation. OTEL has the added advantage of being compatible with AppDynamics, DataDog, New Relic, and many other platforms, enabling organisations to view their sensor data on the same dashboards as the metrics from the applications that consume and process the outputs from those sensors.
Proposed Implementation
An additional Integration would be added to the platform; however, it would use either Prometheus Remote Write or the OTEL Collector's OTLP endpoint to send the data from the AS to the remote endpoint.
It would require at minimum: