Closed. lolodomo closed this issue 2 weeks ago.
Here is the line triggering the update of thing to OFFLINE: https://github.com/openhab/openhab-addons/blob/283ae59bff2a6bef4872ecb8c046ea0dbc34edb2/bundles/org.openhab.binding.hue/src/main/java/org/openhab/binding/hue/internal/handler/HueBridgeHandler.java#L738
I guess it is normal for the bridge to indicate offline if the connection is lost.
The real question is what is causing the connection to be lost?
> The real question is what is causing the connection to be lost?
Yes, that is the question. I will add logs to the binding to understand in which case it happens. Maybe it is due to authentication.
There was a fairly major firmware update for v2 bridges on April 25th, so maybe they pushed something for v1 at the same time.
https://www.philips-hue.com/en-gb/support/release-notes/bridge
After adding/updating some logs, it looks like a timeout problem when polling the bridge:
2024-06-08 11:49:47.423 [WARN ] [ue.internal.handler.HueBridgeHandler] - Connection to Hue Bridge 192.168.xxx.xxx lost: Total timeout 5000 ms elapsed
org.openhab.core.i18n.CommunicationException: Total timeout 5000 ms elapsed
at org.openhab.binding.hue.internal.connection.HueBridge.doNetwork(HueBridge.java:1128) ~[bundleFile:?]
at org.openhab.binding.hue.internal.connection.HueBridge.doNetwork(HueBridge.java:1094) ~[bundleFile:?]
at org.openhab.binding.hue.internal.connection.HueBridge.get(HueBridge.java:1077) ~[bundleFile:?]
at org.openhab.binding.hue.internal.connection.HueBridge.getGroups(HueBridge.java:491) ~[bundleFile:?]
at org.openhab.binding.hue.internal.handler.HueBridgeHandler$2.updateGroups(HueBridgeHandler.java:312) ~[bundleFile:?]
at org.openhab.binding.hue.internal.handler.HueBridgeHandler$2.doConnectedRun(HueBridgeHandler.java:254) ~[bundleFile:?]
at org.openhab.binding.hue.internal.handler.HueBridgeHandler$PollingRunnable.run(HueBridgeHandler.java:138) [bundleFile:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [?:?]
at java.util.concurrent.FutureTask.runAndReset(Unknown Source) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
at java.lang.Thread.run(Unknown Source) [?:?]
Maybe 5s is a little short as a timeout? I am going to try with a bigger timeout.
> Maybe 5s is a little short as a timeout?
Yes this could indeed be the case. For example: if the Hue Bridge is running a Hue Labs automation such as 'Color Loop' then it responds much slower to HTTP requests.
Note however that the above problem may go away rather soon, because Philips / Signify will remove Hue Labs from the bridges at the end of this month (if I recall correctly), and will have replaced it with newer built-in effects such as 'Prism' and/or other built-in functions which perform faster.
@andrewfg I am using a bridge v1 so I don't think there will be any change from Philips / Signify in the future.
Trying with 8s as the timeout for requests, I have not yet encountered any problem. Let's see if it is reliable after a few hours.
It just happened again with the 8s timeout.
@lolodomo I suppose that, being a v1 hub, you are using API v1 rather than v2. If so, I am pretty sure there was nothing recently changed on the OH side that could have suddenly started your issue after it working for so long. However, it also seems unlikely that Philips / Signify has changed the v1 hub firmware. The only thing that I can think of may be the lamp firmware, but this seems unlikely to cause an HTTP comms issue.
@lolodomo by the way, the API v2 specification states that the Bridge is limited in the number of serial HTTP calls per second (as I recall, about 20 per second) and the number of parallel overlapping HTTP calls (as I recall, about 2 or 3). I suppose that a similar limitation would apply to API v1 and the v1 bridge. When I wrote the API v2 code I introduced a throttler to reduce HTTP calls to within the given serial/parallel limits. I did not back-port it to API v1 since I assumed that if anyone would ever encounter HTTP limitations they could solve it by upgrading to API v2. Yet for your special case you could consider back-porting the throttler -- just a thought.
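To illustrate the throttler idea, here is a minimal sketch (hypothetical names, not the binding's actual implementation) that bounds parallel in-flight calls with a semaphore and enforces a minimum gap between call starts. The limits (3 parallel, 50 ms gap, i.e. about 20 calls/s) follow the figures recalled above and are assumptions:

```java
import java.util.concurrent.Semaphore;

// Sketch of a serial/parallel HTTP call throttler: a semaphore caps the
// number of overlapping calls, and a minimum gap between call starts caps
// the serial call rate. Callers wrap each HTTP request in acquire()/release().
public class CallThrottler {
    private final Semaphore parallel;
    private final long minGapMillis;
    private long lastStart = 0;

    public CallThrottler(int maxParallel, long minGapMillis) {
        this.parallel = new Semaphore(maxParallel);
        this.minGapMillis = minGapMillis;
    }

    /** Block until a call slot is free and the minimum gap has elapsed. */
    public void acquire() throws InterruptedException {
        parallel.acquire();
        synchronized (this) {
            long wait = lastStart + minGapMillis - System.currentTimeMillis();
            if (wait > 0) {
                Thread.sleep(wait);
            }
            lastStart = System.currentTimeMillis();
        }
    }

    /** Release the slot once the HTTP call has completed. */
    public void release() {
        parallel.release();
    }

    public int availableSlots() {
        return parallel.availablePermits();
    }
}
```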
The problem is unchanged with a 10s timeout.
As I have the sensor polling interval set to 0, I should not have concurrent requests, except of course if the lights polling sometimes takes more than 10s and the next cycle starts while the previous one has not ended. If that is the problem, I could try to increase the refresh interval to see if it helps.
In fact, there is a lock mechanism, so concurrent requests while polling data are not possible.
One solution, or rather a workaround, could be to not set the bridge thing to OFFLINE when there is a request timeout during the data polling, but rather to do it only if it happens during two consecutive polls.
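A minimal sketch of that idea (hypothetical names): only report OFFLINE once a configurable number of polls in a row have failed, and reset the counter on any successful poll.

```java
// Sketch of the "two consecutive failed polls" workaround: a single timed-out
// poll is tolerated; only repeated consecutive failures mark the thing OFFLINE.
public class OfflineTracker {
    private final int threshold;
    private int consecutiveFailures = 0;

    public OfflineTracker(int threshold) {
        this.threshold = threshold;
    }

    /** Record a poll result; returns true when the thing should go OFFLINE. */
    public boolean recordPoll(boolean success) {
        consecutiveFailures = success ? 0 : consecutiveFailures + 1;
        return consecutiveFailures >= threshold;
    }
}
```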
I am testing a basic retry mechanism added for GET requests failing with a timeout, and it seems to work well. We will see if it is fully reliable after several hours.
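The shape of such a retry can be sketched as a small generic wrapper (an assumption of how it might look, not the binding's actual method): run the call, and on an exception such as a timeout, immediately try again up to a maximum number of attempts.

```java
import java.util.concurrent.Callable;

// Sketch of a basic retry for GET requests failing with a timeout: retry the
// call up to maxAttempts times, rethrowing the last exception if all fail.
public class RetryingCaller {
    public static <T> T callWithRetry(Callable<T> call, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e; // e.g. "Total timeout 5000 ms elapsed"; retry immediately
            }
        }
        throw last;
    }
}
```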
In addition to the now implemented retry mechanism, I am currently testing with a timeout set to 7s (instead of 5s) to check if I get fewer timeouts.
> In addition to the now implemented retry mechanism, I am currently testing with a timeout set to 7s (instead of 5s) to check if I get fewer timeouts.
After one week with the timeout set to 7s, I can see that the stats are not better. So let's keep the timeout at 5s.
> with timeout set to 7s, I can see that the stats are not better. So let's keep the timeout to 5s.
If I understand correctly, in your latest PR you do an immediate retry after a 5s timeout, and this mostly succeeds. The total timeout for two calls is 2 x 5s = 10s, and increasing to a 7s timeout (i.e. 14s in total) does not reduce the failure rate.
It would be interesting to discover whether the timeout is actually caused by a) a socket connect timeout, or b) an HTTP response timeout. The former would indicate that the bridge is in a fundamentally non-receptive state, whereas the latter would simply indicate that it is busy.
> It would be interesting to discover if the timeout is actually caused a) by a socket connect timeout, or b) an HTTP response timeout.
The first exception is always an HTTP response timeout. When the retry fails, it can happen that it is due to a socket timeout exception, but most of the time it is again an HTTP response timeout.
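With the JDK's own HTTP client the two cases can be told apart by exception type: `HttpConnectTimeoutException` means the socket connect itself timed out (bridge non-receptive), while a plain `HttpTimeoutException` means the connection was made but the response was too slow (bridge busy). This is a sketch for illustration; the binding itself appears to use Jetty (the "Total timeout ... elapsed" message is Jetty's), where the distinction is analogous but surfaced through different exception types.

```java
import java.net.http.HttpConnectTimeoutException;
import java.net.http.HttpTimeoutException;

// Sketch: classify a timeout from the JDK HTTP client. Note that
// HttpConnectTimeoutException is a subclass of HttpTimeoutException,
// so it must be checked first.
public class TimeoutClassifier {
    public static String classify(Throwable t) {
        if (t instanceof HttpConnectTimeoutException) {
            return "connect-timeout";   // case a): bridge not accepting connections
        } else if (t instanceof HttpTimeoutException) {
            return "response-timeout";  // case b): bridge connected but slow
        }
        return "other";
    }
}
```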
It is already better now. The next step will be to buy a new bridge v2 ;)
I found in my logs that my bridge (v1) thing is going OFFLINE frequently; the reason is apparently a lost connection. When it happens, the bridge goes back ONLINE 10s later. It happens even during the night when there is no activity in the house. The frequency is not regular.