ioBroker.influxdb

This adapter saves state history into InfluxDB (not for Windows).

The adapter supports InfluxDB 1.x and 2.x.

This adapter uses Sentry libraries to automatically report exceptions and code errors to the developers. For more details and for information on how to disable the error reporting, see the Sentry-Plugin documentation. Sentry reporting is used starting with js-controller 3.0.

InfluxDB version support

InfluxDB 1.x

If you have an InfluxDB 1.x installation (preferably 1.8.x or 1.9.x), choose "1.x" in the adapter configuration and enter the host IP and port together with the username and password for access. You can also define a database name; the default is iobroker. On the first adapter start, this database is created.

When doing custom queries via the "query" message, you can use InfluxQL to select the data you want. Flux with InfluxDB 1.x is not supported (and will not be added).

InfluxDB 2.x

Since version 2.0 of the adapter, InfluxDB 2.x is also supported, which works a bit differently. Here, besides the host IP and port, an organization and an access token are required instead of username and password.

You can also define a database name - this is used as a Bucket. The default is iobroker. On the first adapter start, this bucket is created in the configured organization.

When doing custom queries via the "query" message, you can use Flux queries to select the data you want. Details on Flux can be found at https://docs.influxdata.com/influxdb/v2.0/reference/flux/

Store metadata information as tags instead of fields

For Influx 1.x, the state value as well as the associated metadata fields (q, ack and from) are stored as fields within InfluxDB. When using Flux commands to retrieve this data (instead of InfluxQL), the data is returned in separate tables, which makes it more difficult to view the data in a joined way when using external database clients or the Influx CLI query command. This is by design, as Flux returns each field in a separate table.

With Influx 2.x it is now also possible to store this metadata information alongside the actual state value as Influx tags. Tags are indexed and allow for faster search queries. In addition, they are closely linked to the measurement stored within the database, meaning they are returned in one table together with the queried measurement, which makes them much easier to handle when used outside this adapter. There are limitations with this approach, however; for example, tag values in InfluxDB are always stored as strings.
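
As a hedged illustration only: assuming the tag keys mirror the metadata field names (ack, q, from) when tag storage is enabled, and that the measurement name is the state ID, a Flux query can then filter directly on the metadata, e.g. from the Javascript adapter:

// Sketch under the assumptions above; the bucket and state ID are examples.
// Tag values are strings, so the ack tag is compared against "true".
sendTo('influxdb.0', 'query',
    'from(bucket: "iobroker") |> range(start: -1h) |> filter(fn: (r) => r._measurement == "system.adapter.admin.0.memRss" and r.ack == "true")',
    function (result) {
        if (result.error) {
            console.error(result.error);
        } else {
            console.log('Rows: ' + JSON.stringify(result.result[0]));
        }
    });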

Migration from InfluxDB 1 to 2

Please refer to the official guides on how to migrate from InfluxDB 1.x to 2.x. Especially the migration instructions for time series data have been verified to work during adapter testing. Please always create a backup of your data before performing the migration.

After the migration, the adapter is able to work with the old data (e.g. for history-queries) as well. This only applies if you don't decide to use Tags for storing metadata fields.

Retention Policy

While Influx 1.x supports the concept of multiple retention policies for one database, Influx 2 by design allows only one retention period per bucket. Therefore, it is only possible to set one policy for the whole database/bucket with this adapter via Default Settings -> Storage retention. The retention selected here will be applied on the fly and can be changed at any time. Retention policies set by the adapter will never be deleted, but instead altered if required, as otherwise Influx 1.x would delete all data that the policy applied to.

Please also read Understanding Retention Policies.

Direct writes or buffered writes?

With the default configuration, the adapter stores each single data point directly into the database and only uses the internal buffer if the database is not available. In that case, the buffer is flushed at the configured interval, so it can take up to that interval until the missing points are written!

By changing the configuration, it is possible to cache new data points up to a defined count or for a defined maximum interval, after which all points are stored into the database. This gives better performance and less system load compared to writing the data points directly. InfluxDB has a limit on the maximum size of a write, which is around 2 MB. It should be safe to use a buffer maximum of up to 15,000 data points, maybe even 20,000, but this highly depends on the length of your data point IDs.
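
As a rough back-of-envelope check of these numbers (the per-point size is an assumption for illustration only):

// Sketch: estimate how many buffered points fit into one ~2 MB write.
const maxWriteBytes = 2 * 1024 * 1024; // approximate InfluxDB write limit mentioned above
const bytesPerPoint = 50 + 85;         // assumed average ID length plus value/metadata/timestamp overhead
console.log(Math.floor(maxWriteBytes / bytesPerPoint)); // ~15500 points, in line with the 15,000 guideline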

On adapter exit, the buffer is stored on disk and re-initialized at the next adapter start, so no data points should be lost; they will be written after the next start.

InfluxDB and data types

InfluxDB is very strict on data types. The data type of a measurement value is defined with its first write. The adapter tries to write values with the correct type, but if the data type changes for the same state, write errors may occur in InfluxDB. The adapter detects this and always writes such potentially conflicting data points directly, but a write error means that the value is not written into the DB at all, so make sure to check the logs for such cases.

In versions 1.x and 2.x of the adapter, some values could also be converted incorrectly when no data type was defined. E.g. a string like 37.5;foo bar was converted to the number 37.5 in older versions. Version 3 of the adapter detects that this is not a valid number and does not convert the value. This can lead to type conflicts after the update. Please check these values and decide whether and how you want to store them in the future.
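
As an illustration (a sketch only, using a hypothetical state ID): once a measurement has been written as a number, a later write with a string value for the same ID conflicts with the stored data type and ends up as a write error in the log instead of being stored:

// Sketch: the first write fixes the field type of this (hypothetical) measurement to float ...
sendTo('influxdb.0', 'storeState', {
    id: 'javascript.0.myTestValue',
    state: {ts: Date.now(), val: 37.5, ack: true}
}, result => console.log('stored number'));

// ... so a later string value for the same ID conflicts with that type and is not written;
// check the adapter log for the resulting write error.
sendTo('influxdb.0', 'storeState', {
    id: 'javascript.0.myTestValue',
    state: {ts: Date.now(), val: '37.5;foo bar', ack: true}
}, result => console.log('tried to store string'));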

Additionally, InfluxDB does not support "null" values, so these are not written at all into the DB.

Installation of InfluxDB

Please refer to the official InfluxDB pages for installation instructions depending on your OS.

Setup authentication for InfluxDB 1.x (optional)

NOTE: InfluxDB 2.x relies on organization/token login instead of username/password! This section is only applicable to InfluxDB 1.x.

If you use the DB locally, you may leave authentication disabled and skip this part.

Installation of Grafana (Charting Tool)

Grafana is an additional charting tool for InfluxDB. It must be installed separately.

Install a current version of Grafana (3.x or later), because InfluxDB support is enhanced compared to earlier Grafana versions.

Under Debian you can install it as described at http://docs.grafana.org/installation/debian/. For ARM platforms you can check for v3.x at https://github.com/fg2it/grafana-on-raspberry.

Explanation for other OS can be found here.

After Grafana is installed, create a connection (data source) to InfluxDB as described in the Grafana documentation.

Default Settings

Most of these values can be pre-defined in the instance settings; they are then pre-filled or used as defaults for each data point.

Access values from Javascript adapter

The sorted values can be accessed from the Javascript adapter via the getHistory message.

Possible options include the requested time period (start/end), the step, the maximum count and the aggregation method; see the notes and the example sketch below.

The first and last points are calculated for all aggregations except none. If you manually request an aggregation, you should ignore the first and last values, because they are calculated from values outside of the requested period.

When raw data is selected without using a step, the returned fields are ts, val, ack, q and from. As soon as a step is used, only ts and val are returned.
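
A minimal sketch of such a request via the getHistory message (the state ID is an example and the shown options follow the common ioBroker history options):

// Sketch: read the raw values of the last hour for one state.
sendTo('influxdb.0', 'getHistory', {
    id: 'system.adapter.admin.0.memRss',
    options: {
        start:     Date.now() - 3600000, // one hour ago
        end:       Date.now(),
        aggregate: 'none'                // raw data: ts, val, ack, q and from are returned
    }
}, function (result) {
    if (result.error) {
        console.error(result.error);
    } else {
        console.log('Values: ' + JSON.stringify(result.result));
    }
});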

Interpolated values will be marked as i=true, like: {i: true, val: 4.7384845, ts: 29892365723652}.

Please keep in mind that InfluxDB aggregates on "rounded time boundaries" (see https://docs.influxdata.com/influxdb/v0.11/troubleshooting/frequently_encountered_issues/#understanding-the-time-intervals-returned-from-group-by-time-queries).

InfluxDB is very strict when it comes to data types. This also has effects on aggregator functions.
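
For example (a sketch with a hypothetical boolean state, using Influx 1.x / InfluxQL syntax): numeric aggregations such as MEAN() only work on integer or float fields, while COUNT() works for any field type:

// Sketch: MEAN() on a boolean field yields no usable result, COUNT() does.
sendTo('influxdb.0', 'query',
    'SELECT MEAN("value") FROM iobroker.global."hm-rpc.0.mySwitch.STATE"; SELECT COUNT("value") FROM iobroker.global."hm-rpc.0.mySwitch.STATE"',
    function (result) {
        console.log('MEAN:  ' + JSON.stringify(result.result[0])); // empty or an error for a boolean field
        console.log('COUNT: ' + JSON.stringify(result.result[1])); // works regardless of the field type
    });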

Custom queries

The user can execute custom queries on the data from the Javascript adapter.

The multi-query feature is also supported. You can send multiple queries separated by a semicolon.

That's why the result is always an array with one numbered index for each query.

Influx 1.x

Example with one query:

sendTo('influxdb.0', 'query', 'SELECT * FROM iobroker.global."system.adapter.admin.0.memRss" LIMIT 100', function (result) {
    if (result.error) {
        console.error(result.error);
    } else {
        // show result
        console.log('Rows: ' + JSON.stringify(result.result[0]));
    }
});

Two queries:

sendTo('influxdb.0', 'query', 'SELECT * FROM iobroker.global."system.adapter.admin.0.memRss" LIMIT 100; SELECT * FROM iobroker.global."system.adapter.admin.0.memHeapUsed" LIMIT 100', function (result) {
    if (result.error) {
        console.error(result.error);
    } else {
        // show result
        console.log('Rows First: ' + JSON.stringify(result.result[0]));
        console.log('Rows Second: ' + JSON.stringify(result.result[1]));
    }
});

NOTE: The values come back in the result array in the field name "value" (instead of "val" as usual in ioBroker).

Influx 2.x

From InfluxDB v2.0 onwards, the SQL-like query language InfluxQL is deprecated in favour of Flux. For more information, please refer to the official InfluxDB 2.0 documentation.

Example with one query:

sendTo('influxdb.0', 'query', 'from(bucket: "iobroker") |> range(start: -3h)', function (result) {
    if (result.error) {
        console.error(result.error);
    } else {
        // show result
        console.log('Rows: ' + JSON.stringify(result));
    }
});

Two queries: NOTE: By default you cannot execute two queries at once in the Flux language, as there is no delimiter available. This adapter emulates multi-query support by defining ; as the delimiter, so you can still run two queries in one statement.

sendTo('influxdb.0', 'query', 'from(bucket: "iobroker") |> range(start: -3h); from(bucket: "iobroker") |> range(start: -1h)', function (result) {
    if (result.error) {
        console.error(result.error);
    } else {
        // show result
        console.log('Rows First: ' + JSON.stringify(result.result[0])); // Values from last 3 hours
        console.log('Rows Second: ' + JSON.stringify(result.result[1])); // Values from last hour
    }
});

NOTE: The values come back in the result array in the field name "value" (instead of "val" as usual in ioBroker).

storeState

If you want to write other data into InfluxDB, you can use the built-in system function storeState. This function can also be used to migrate data from other history adapters like History or SQL.

A successful response does not mean that the data has really been written to disk; it just means that it was processed.

The given IDs are not checked against the ioBroker database and do not need to be set up or enabled there. If such own IDs are used without settings, the "rules" parameter is not supported and results in an error. The default "Maximal number of stored in RAM values" is used for such IDs.

The message can have one of the following three formats:

sendTo('influxdb.0', 'storeState', {
    id: 'mbus.0.counter.xxx',
    state: {ts: 1589458809352, val: 123, ack: false, from: 'system.adapter.whatever.0', ...}
}, result => console.log('added'));

sendTo('influxdb.0', 'storeState', {
    id: 'mbus.0.counter.xxx',
    state: [
      {ts: 1589458809352, val: 123, ack: false, from: 'system.adapter.whatever.0', ...}, 
      {ts: 1589458809353, val: 123, ack: false, from: 'system.adapter.whatever.0', ...}
    ]
}, result => console.log('added'));

sendTo('influxdb.0', 'storeState', [
    {id: 'mbus.0.counter.xxx', state: {ts: 1589458809352, val: 123, ack: false, from: 'system.adapter.whatever.0', ...}}, 
    {id: 'mbus.0.counter.yyy', state: {ts: 1589458809353, val: 123, ack: false, from: 'system.adapter.whatever.0', ...}}
], result => console.log('added'));

Additionally, you can add the attribute rules: true to a message to activate all configured rules, like counter, changesOnly, de-bounce and so on.

In case of errors, an array with all individual error messages is returned, together with a successCount that shows how many entries were stored successfully.
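
A hedged sketch that combines both points: sending a value with rules: true and inspecting the returned error list and successCount (the exact response shape is assumed from the description above):

// Sketch: store one value with the configured rules applied and check the response.
sendTo('influxdb.0', 'storeState', {
    id: 'mbus.0.counter.xxx',
    rules: true, // apply counter, changesOnly, de-bounce, ...
    state: {ts: Date.now(), val: 124, ack: true}
}, result => {
    if (result.error) {
        console.error('Errors: ' + JSON.stringify(result.error)); // assumed: one message per failed entry
    }
    console.log('Successfully stored entries: ' + result.successCount);
});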

delete state

If you want to delete entries from the database, you can use the built-in system function delete:

sendTo('influxdb.0', 'delete', [
    {id: 'mbus.0.counter.xxx', state: {ts: 1589458809352}}, 
    {id: 'mbus.0.counter.yyy', state: {ts: 1589458809353}}
], result => console.log('deleted'));

To delete ALL history data for some data point execute:

sendTo('influxdb.0', 'deleteAll', [
    {id: 'mbus.0.counter.xxx'}, 
    {id: 'mbus.0.counter.yyy'}
], result => console.log('deleted'));

To delete history data for some data point and for some range, execute:

sendTo('influxdb.0', 'deleteRange', [
    {id: 'mbus.0.counter.xxx', start: '2019-01-01T00:00:00.000Z', end: '2019-12-31T23:59:59.999'}, 
    {id: 'mbus.0.counter.yyy', start: 1589458809352, end: 1589458809353}
], result => console.log('deleted'));

The time can be given as milliseconds since epoch or as a string that can be parsed by the JavaScript Date object.

Values are deleted inclusive of the defined limits: ts >= start AND ts <= end.

change state

If you want to change an entry's value, quality or acknowledge flag in the database, you can use the built-in system function update:

sendTo('influxdb.0', 'update', [
    {id: 'mbus.0.counter.xxx', state: {ts: 1589458809352, val: 15, ack: true, q: 0}}, 
    {id: 'mbus.0.counter.yyy', state: {ts: 1589458809353, val: 16, ack: true, q: 0}}
], result => console.log('updated'));

ts is mandatory. At least one other flag must be included in a state object.

Flush Buffers

If you want to flush the buffers for one or all data points to the database, you can use the built-in system function flushBuffer:

sendTo('influxdb.0', 'flushBuffer', {id: 'mbus.0.counter.xxx'},
    result => console.log('flushed, error: ' + result.error));

If no id is provided, all buffers are flushed.
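
For example (a sketch; an empty message object is assumed to flush everything):

// Sketch: flush the write buffers of all data points.
sendTo('influxdb.0', 'flushBuffer', {}, result => console.log('flushed, error: ' + result.error));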

History Logging Management via Javascript

The adapter supports enabling and disabling of history logging via JavaScript and also retrieving the list of enabled data points with their settings.

enable

The message requires the id of the data point. Additionally, optional options can be passed to define data point specific settings:

sendTo('influxdb.0', 'enableHistory', {
    id: 'system.adapter.influxdb.0.memRss',
    options: {
        changesOnly:  true,
        debounce:     0,
        retention:    31536000,
        maxLength:    3,
        changesMinDelta: 0.5,
        aliasId: ''
    }
}, function (result) {
    if (result.error) {
        console.log(result.error);
    }
    if (result.success) {
        // successfully enabled
    }
});

disable

The message requires the id of the data point.

sendTo('influxdb.0', 'disableHistory', {
    id: 'system.adapter.influxdb.0.memRss',
}, function (result) {
    if (result.error) {
        console.log(result.error);
    }
    if (result.success) {
        // successfully disabled
    }
});

get List

The message has no parameters.

sendTo('influxdb.0', 'getEnabledDPs', {}, function (result) {
    // result is an object like:
    console.log(JSON.stringify({
        'system.adapter.influxdb.0.memRss': {
            changesOnly: true,
            debounce: 0,
            retention: 31536000,
            maxLength: 3,
            changesMinDelta: 0.5,
            enabled: true,
            changesRelogInterval: 0,
            aliasId: ''
        }
        // ...
    }));
});

Changelog

4.0.3 (2024-05-16)

4.0.2 (2024-01-03)

3.2.0 (2022-09-19)

3.1.8 (2022-08-13)

3.1.7 (2022-06-27)

3.1.6 (2022-06-27)

3.1.5 (2022-06-12)

3.1.4 (2022-06-08)

3.1.3 (2022-06-01)

3.1.2 (2022-05-31)

3.1.0 (2022-05-27)

3.0.2 (2022-05-12)

3.0.1 (2022-05-11)

3.0.0 (2022-05-11)

2.6.3 (2022-03-07)

2.6.2 (2022-03-03)

2.6.1 (2022-02-28)

2.6.0 (2022-02-24)

2.5.2 (2022-02-22)

2.5.0 (2022-02-14)

2.4.0 (2021-12-19)

2.3.0 (2021-12-14)

2.2.0 (2021-08-25)

2.1.1 (2021-08-13)

1.9.5 (2021-04-19)

1.9.4 (2021-01-17)

1.9.3 (2020-11-07)

1.9.2 (2020-08-06)

1.9.1 (2020-07-22)

1.9.0 (2020-07-21)

1.8.8 (2020-07-18)

1.8.7 (2020-05-14)

1.8.6 (2020-05-11)

1.8.5 (2020-05-08)

1.8.4 (2020-05-02)

1.8.3 (2020-04-29)

1.8.2 (2020-04-19)

1.4.2 (2017-03-02)

1.3.4 (2017-02-22)

1.3.3 (2017-02-08)

1.3.2

1.3.1 (2017-01-16)

1.3.0 (2016-12-02)

1.2.1 (2016-11)

1.2.0 (2016-11-05)

1.1.1 (2016-11-03)

1.1.0 (2016-10-29)

1.0.1 (2016-10-18)

1.0.0 (2016-10-10)

0.5.3 (2016-09-30)

0.5.2 (2016-09-25)

0.5.1 (2016-09-20)

0.5.0 (2016-08-30)

0.4.0 (2016-08-27)

0.3.1 (2016-06-07)

0.3.1 (2016-06-05)

0.3.0 (2016-05-18)

0.2.0 (2016-04-30)

0.1.2 (2015-12-19)

0.1.1 (2015-12-19)

0.1.0 (2015-12-19)

0.0.2 (2015-12-14)

0.0.1 (2015-12-12)

License

The MIT License (MIT)

Copyright (c) 2015-2024 bluefox, apollon77

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.