Closed: mman closed this 1 month ago
I am able to reproduce it. Not sure yet how to fix it though.
@dirkjanfaber I have faced a similar issue in Venus Influx Loader, where the TCP connection to InfluxDB may get interrupted and needs to be re-established when that happens. In Venus Influx Loader, data collected in the meantime is cached and then flushed.
In this repo it is easier since we only need to reconnect periodically if the dbus connection fails.
In Venus Influx Loader this is done by invoking and re-scheduling the `start` method linked here:
I can take a look next week...
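For reference, the cache-and-reschedule approach described above could look roughly like this. This is an illustrative sketch only; the names (`Loader`, `start`, `write`, `flush`) and the fixed retry interval are assumptions, not the actual Venus Influx Loader API:

```javascript
// Illustrative sketch of cache-and-reschedule reconnection: while the
// connection is down, incoming points are buffered; on a failed start()
// the same start() method is simply re-scheduled until it succeeds.
class Loader {
  constructor (connect, retryMs = 5000) {
    this.connect = connect     // async function that resolves once the
    this.retryMs = retryMs     // InfluxDB TCP connection is established
    this.cache = []            // points buffered while disconnected
    this.connected = false
  }

  async start () {
    try {
      await this.connect()
      this.connected = true
      this.flush()             // drain everything buffered meanwhile
    } catch (err) {
      this.connected = false
      // re-schedule the same start method after a delay
      setTimeout(() => this.start(), this.retryMs)
    }
  }

  write (point) {
    if (this.connected) this.send(point)
    else this.cache.push(point)          // buffer until reconnected
  }

  flush () {
    while (this.cache.length) this.send(this.cache.shift())
  }

  send (point) { /* write to InfluxDB; omitted in this sketch */ }
}
```

The key point is that a failed `start()` never gives up permanently; it always leaves a timer behind that will try again.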
Is there a solution to this in the meantime? After rebooting the Cerbo, all my nodes are offline.
Yes, it was handled in this PR and is already part of the pre-released version 1.5.17, which will be part of Venus in the next beta release. After that I'll release it for the rest of the users too.
Meanwhile you can download the .tar.gz file from here too: https://github.com/victronenergy/node-red-contrib-victron/releases/tag/v1.5.17
Great, thank you very much. Why do I only get "@victronenergy/node-red-contrib-victron" version 1.5.15 via Node-RED? Shouldn't that already be 1.5.16?
Yes, it should. And now I have accidentally updated it to become 1.5.17 already, which is no problem. In a few hours all of the Node-RED caches will have been updated and you should be able to upgrade to 1.5.17 from Node-RED.
Describe the bug
I am running Node-RED on a Raspberry Pi and connecting to D-Bus on a Cerbo GX by specifying the environment variable `NODE_RED_DBUS_ADDRESS=<IP>:78`. This works excellently until the Cerbo GX reboots, due to a firmware update for example.

To Reproduce
Steps to reproduce the behavior: set `NODE_RED_DBUS_ADDRESS=<IP>:78`, where `<IP>` is a remote Cerbo, then reboot the Cerbo GX. The Victron nodes go into a `disconnected` state and never recover.

Expected behavior
Victron nodes should become disconnected when the Cerbo GX disappears, but probably should attempt to reconnect periodically with exponential backoff when going into that state.
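The periodic-reconnect-with-backoff behavior suggested above could be sketched like this. This is illustrative only, not the plugin's actual implementation; `connect` stands in for whatever re-opens the D-Bus TCP socket:

```javascript
// Sketch of reconnect with exponential backoff: the delay doubles on
// every failed attempt (1 s, 2 s, 4 s, ...) up to a cap, so a rebooting
// Cerbo GX is picked up again shortly after it comes back.
async function reconnectWithBackoff (connect, { baseMs = 1000, capMs = 60000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await connect()                      // success: hand back the connection
    } catch (err) {
      const delayMs = Math.min(baseMs * 2 ** attempt, capMs)
      await new Promise(resolve => setTimeout(resolve, delayMs))
    }
  }
}
```

Capping the delay matters here: without it, a long firmware update would push the next retry hours out, leaving the nodes offline far longer than necessary.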
Screenshots
Normal functionality:

After Cerbo GX reboot:

Node-RED system log confirms D-Bus is gone:
Additional context
The code responsible for handling the error/disconnect from D-Bus seems to live here, but I am not sure what it does after it propagates the error up with `reject`: https://github.com/victronenergy/node-red-contrib-victron/blob/24953fc521ce648f60a4ce1c0a7190f5442954f1/src/services/dbus-listener.js#L114